
Implementation:Onnx Onnx Numpy Assert Allclose

From Leeroopedia


Knowledge Sources
Domains Testing, Numerical_Validation
Last Updated 2026-02-10 00:00 GMT

Overview

External tool, provided by the NumPy testing module, for numerically comparing arrays element-wise within specified absolute and relative tolerances.

Description

numpy.testing.assert_allclose compares two arrays element-wise and raises AssertionError if any element differs by more than the combined absolute and relative tolerance. It is the standard tool for validating the numerical correctness of ONNX model outputs, both in the ONNX test suite and in user testing workflows. On failure it produces a detailed error message reporting the number of mismatched elements, the maximum absolute and relative differences, and the actual and desired values.
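
A minimal sketch of the check it performs and of the failure behavior (the array values below are illustrative):

import numpy as np

actual = np.array([1.00001, 2.0])
desired = np.array([1.0, 2.0])

# Element-wise criterion: |actual - desired| <= atol + rtol * |desired|
# The difference of 1e-5 is within atol + rtol * |desired| = 1e-6 + 1e-5, so this passes.
np.testing.assert_allclose(actual, desired, rtol=1e-5, atol=1e-6)

# With the defaults (rtol=1e-7, atol=0) the same difference is too large and an
# AssertionError is raised carrying the mismatch details.
try:
    np.testing.assert_allclose(actual, desired)
except AssertionError as err:
    print(err)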

This is an External Tool Doc: the function comes from NumPy, not from the ONNX repository.

Usage

Use this function after running model inference via ReferenceEvaluator.run or any ONNX runtime. Compare computed outputs against known reference values. Adjust rtol and atol based on precision requirements.
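
As a sketch of how tolerances might be selected in practice, one convention is to key them off the output dtype; the specific values and the helper below are illustrative assumptions, not defaults required by ONNX or NumPy:

import numpy as np

# Illustrative per-dtype tolerances; tighten or loosen for the model under test.
TOLERANCES = {
    np.dtype(np.float64): dict(rtol=1e-7, atol=0.0),
    np.dtype(np.float32): dict(rtol=1e-5, atol=1e-6),
    np.dtype(np.float16): dict(rtol=1e-3, atol=1e-4),
}

def check_output(actual, desired):
    # Hypothetical helper: pick tolerances from the reference array's dtype.
    tol = TOLERANCES.get(desired.dtype, dict(rtol=1e-7, atol=0.0))
    np.testing.assert_allclose(actual, desired, **tol)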

Code Reference

Source Location

  • Repository: External (NumPy)
  • Package: numpy

Signature

numpy.testing.assert_allclose(
    actual,
    desired,
    rtol=1e-7,
    atol=0,
    equal_nan=True,
    err_msg="",
    verbose=True,
)

Import

import numpy as np
# or: from numpy.testing import assert_allclose

I/O Contract

Inputs

Name Type Required Description
actual np.ndarray Yes Computed output from model inference
desired np.ndarray Yes Expected reference output
rtol float No Relative tolerance (default: 1e-7)
atol float No Absolute tolerance (default: 0)
equal_nan bool No Whether to treat NaN as equal (default: True)

Outputs

Name Type Description
return None Returns nothing on success; raises AssertionError on mismatch
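
A minimal sketch of the equal_nan and failure behavior described above (array values are illustrative):

import numpy as np

a = np.array([np.nan, 1.0])
b = np.array([np.nan, 1.0])

# Passes and returns None: matching NaNs compare equal under the default equal_nan=True.
np.testing.assert_allclose(a, b)

# Raises AssertionError: NaNs are treated as unequal when equal_nan=False.
try:
    np.testing.assert_allclose(a, b, equal_nan=False)
except AssertionError:
    print("mismatch reported")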

Usage Examples

Validate Model Outputs

import numpy as np
from onnx.reference import ReferenceEvaluator

evaluator = ReferenceEvaluator("model.onnx")
X = np.random.randn(1, 10).astype(np.float32)

# Run inference
outputs = evaluator.run(None, {"input": X})

# Compare against expected results
expected = np.array([[0.1, 0.9]], dtype=np.float32)
np.testing.assert_allclose(outputs[0], expected, rtol=1e-5, atol=1e-6)
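
Passing None as the first argument to run requests all model outputs; the input name used in the feed dictionary ("input" here) must match the name declared in the model's graph.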

Cross-runtime Comparison

import numpy as np
from onnx.reference import ReferenceEvaluator
import onnxruntime as ort  # any other ONNX runtime works; onnxruntime is assumed here

X = np.random.randn(1, 10).astype(np.float32)

# Compare reference evaluator output vs another runtime
ref_evaluator = ReferenceEvaluator("model.onnx")
optimized_runtime = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

ref_output = ref_evaluator.run(None, {"input": X})[0]
opt_output = optimized_runtime.run(None, {"input": X})[0]

np.testing.assert_allclose(
    opt_output, ref_output,
    rtol=1e-5, atol=1e-6,
    err_msg="Optimized runtime output differs from reference",
)

Related Pages

Implements Principle

Requires Environment
