
Implementation:Onnx Onnx ReferenceEvaluator Run

From Leeroopedia


Knowledge Sources
Domains Model_Evaluation, Inference
Last Updated 2026-02-10 00:00 GMT

Overview

Concrete tool for executing ONNX models using the pure Python reference evaluator provided by the ONNX reference module.

Description

The run method of ReferenceEvaluator executes the ONNX model by iterating through nodes in topological order. For each node, it collects inputs from previous results, invokes the Python operator implementation, and stores the outputs. The method supports requesting specific outputs by name, returning all intermediate results for debugging (via intermediate=True), and passing linked attributes for function evaluation.

Usage

Call run after initializing a ReferenceEvaluator instance. Pass a dictionary mapping input names to NumPy arrays as the feed dictionary. The method returns a list of NumPy arrays corresponding to the requested output names, or a dictionary of all results when intermediate=True.

Code Reference

Source Location

  • Repository: onnx
  • File: onnx/reference/reference_evaluator.py
  • Lines: 543-610

Signature

def run(
    self,
    output_names,
    feed_inputs: dict[str, Any],
    attributes: dict[str, Any] | None = None,
    intermediate: bool = False,
) -> dict[str, Any] | list[Any]:
    """Executes the ONNX model.

    Args:
        output_names: Requested outputs by name, or None for all outputs.
        feed_inputs: Dictionary of {input_name: numpy_array}.
        attributes: Attribute values for FunctionProto evaluation.
        intermediate: If True, return all results (including intermediates)
            as a dictionary; if False, return only final outputs as a list.

    Returns:
        list[Any] of output arrays when intermediate=False,
        dict[str, Any] of all results when intermediate=True.
    """

Import

from onnx.reference import ReferenceEvaluator

I/O Contract

Inputs

  • output_names (list[str] or None, required): Output names to return; pass None for all graph outputs.
  • feed_inputs (dict[str, Any], required): Mapping of input names to NumPy arrays.
  • attributes (dict[str, Any] or None, optional): Attribute values for FunctionProto evaluation.
  • intermediate (bool, optional, default False): If True, return all intermediate results.

Outputs

  • With intermediate=False: list[Any], a list of NumPy arrays, one per requested output.
  • With intermediate=True: dict[str, Any], a dictionary mapping all tensor names to their values.

Usage Examples

Basic Inference

import numpy as np
from onnx.reference import ReferenceEvaluator

evaluator = ReferenceEvaluator("model.onnx")

# Prepare input data
X = np.random.randn(1, 3, 224, 224).astype(np.float32)

# Run inference
outputs = evaluator.run(None, {"input": X})
print(f"Output shape: {outputs[0].shape}")

Debugging with Intermediate Results

import numpy as np
from onnx.reference import ReferenceEvaluator

evaluator = ReferenceEvaluator("model.onnx", verbose=1)

X = np.array([[1.0, -2.0, 3.0]], dtype=np.float32)
results = evaluator.run(None, {"X": X}, intermediate=True)

# Inspect all intermediate tensor values
for name, value in results.items():
    if value is not None:
        print(f"{name}: {value}")

Request Specific Outputs

import numpy as np
from onnx.reference import ReferenceEvaluator

# A file path or an in-memory onnx.ModelProto may be passed here
evaluator = ReferenceEvaluator("model.onnx")

X = np.random.randn(1, 784).astype(np.float32)

# Only request specific outputs
logits = evaluator.run(["logits"], {"input": X})
print(f"Logits: {logits[0]}")

Related Pages

Implements Principle

Requires Environment
