
Implementation:Onnx Onnx ReferenceEvaluator Init

From Leeroopedia


Knowledge Sources
Domains: Model_Evaluation, Testing
Last Updated: 2026-02-10 00:00 GMT

Overview

Concrete tool for initializing the pure-Python ONNX reference evaluator provided by the onnx.reference module.

Description

The ReferenceEvaluator class constructor accepts a model (as ModelProto, GraphProto, FunctionProto, NodeProto, file path, or bytes) and prepares it for execution. It resolves each operator node to a Python implementation class from onnx.reference.ops, loads initializers, and processes model-level functions. When optimized=True (default), it substitutes optimized variants of expensive operators (e.g., Conv uses im2col + Gemm instead of naive loops).

Usage

Import ReferenceEvaluator when you need to execute an ONNX model in pure Python for testing or debugging. The evaluator supports custom operator implementations via new_ops and verbose execution tracing via verbose.

Code Reference

Source Location

  • Repository: onnx
  • File: onnx/reference/reference_evaluator.py
  • Lines: 195-305

Signature

class ReferenceEvaluator:
    def __init__(
        self,
        proto: Any,
        opsets: dict[str, int] | None = None,
        functions: list[ReferenceEvaluator | FunctionProto] | None = None,
        verbose: int = 0,
        new_ops: list[type[OpRun]] | None = None,
        optimized: bool = True,
    ) -> None:
        """Initialize the reference evaluator.

        Args:
            proto: Model to evaluate. Accepts ModelProto, GraphProto,
                FunctionProto, NodeProto, file path (str), or bytes.
            opsets: Opset versions (required if proto is GraphProto).
            functions: Additional function definitions
                (must be None if proto is a ModelProto).
            verbose: Verbosity level (0=silent, 1+=show intermediates).
            new_ops: Custom operator implementations (list of OpRun subclasses).
            optimized: Use optimized operator kernels (default: True).
        """

Import

from onnx.reference import ReferenceEvaluator

I/O Contract

Inputs

| Name      | Type                                                                   | Required    | Description                                                      |
| --------- | ---------------------------------------------------------------------- | ----------- | ---------------------------------------------------------------- |
| proto     | ModelProto, GraphProto, FunctionProto, NodeProto, str (file path), or bytes | Yes         | Model, graph, function, or node to evaluate                      |
| opsets    | dict[str, int] or None                                                 | Conditional | Required if proto is a GraphProto                                |
| functions | list or None                                                           | No          | Additional function definitions                                  |
| verbose   | int                                                                    | No          | Verbosity level (default: 0)                                     |
| new_ops   | list[type[OpRun]] or None                                              | No          | Custom operator implementations                                  |
| optimized | bool                                                                   | No          | Use optimized kernels (default: True)                            |

Outputs

| Name     | Type               | Description                                            |
| -------- | ------------------ | ------------------------------------------------------ |
| instance | ReferenceEvaluator | Initialized evaluator ready for inference via .run()   |

Usage Examples

Basic Initialization

import onnx
from onnx.reference import ReferenceEvaluator

# From ModelProto
model = onnx.load_model("model.onnx")
evaluator = ReferenceEvaluator(model)

# From file path
evaluator = ReferenceEvaluator("model.onnx")

With Verbose Output

from onnx.reference import ReferenceEvaluator

# Enable verbose to see intermediate results
evaluator = ReferenceEvaluator(model, verbose=2)

With Custom Operators

from onnx.reference import ReferenceEvaluator
from onnx.reference.op_run import OpRun

class MyCustomOp(OpRun):
    op_domain = "my.domain"

    def _run(self, X):
        return (X * 2,)

evaluator = ReferenceEvaluator(model, new_ops=[MyCustomOp])
