Implementation: Alibaba MNN TestMNNFromOnnx
| Field | Value |
|---|---|
| Implementation Name | TestMNNFromOnnx |
| Type | API Doc |
| Category | Model_Conversion_Pipeline |
| Source | tools/script/testMNNFromOnnx.py:L107-279 |
| External Dependencies | onnx, onnxruntime, numpy, MNNConvert (compiled binary or pymnn) |
Summary
testMNNFromOnnx.py is a Python test harness that automates the end-to-end verification of ONNX-to-MNN model conversion. It loads an ONNX model, generates random test inputs, runs inference using ONNX Runtime to produce reference outputs, converts the model via MNNConvert, runs inference on the converted model, and compares the results for numerical correctness.
The script also includes a binary search debug mode that can automatically locate the first operator producing incorrect results by using a dominator-tree-based search algorithm.
API
python testMNNFromOnnx.py <model_path> [layer_name | DEBUG]
Class: TestModel
The main class is TestModel (defined at tools/script/testMNNFromOnnx.py:L107):
class TestModel():
def __init__(self, modelName)
def Test(self) -> str
def Debug(self)
def TestName(self, name)
Constructor: __init__(self, modelName)
- modelName (str) -- Path to the ONNX model file (.onnx)
- Copies the model to a local onnx/ working directory as onnx/test.onnx
- Loads the ONNX model using onnx.load() and extracts output names
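The staging step can be sketched as follows; stage_model is a hypothetical helper name for illustration, and the real constructor additionally calls onnx.load() to read the model's output names:

```python
import os
import shutil

def stage_model(model_path, workdir="onnx"):
    # Copy the model into a local working directory as test.onnx,
    # mirroring the constructor's staging step described above.
    # (onnx.load() and output-name extraction are omitted here.)
    os.makedirs(workdir, exist_ok=True)
    dst = os.path.join(workdir, "test.onnx")
    shutil.copyfile(model_path, dst)
    return dst
```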
Method: Test(self) -> str
Runs the full verification pipeline:
- Calls __run_onnx() to generate test data using ONNX Runtime
- Calls __run_mnn() to convert and test using MNNConvert
- Returns the MNNConvert output string (check for "TEST_SUCCESS")
Method: Debug(self)
Runs the full test and, if it fails, activates binary search debugging:
- Builds the ONNX graph's dominator tree using the Lengauer-Tarjan algorithm (IDominate class)
- Performs binary search on the dominator path to locate the first failing operator
- Prints the name of the first error node
Method: TestName(self, name)
Tests a specific output node (or list of nodes) by name:
- name (str or list[str]) -- Name(s) of intermediate outputs to test
- Modifies the ONNX model to expose the specified node as an output, then runs the verification
Key Parameters
| Parameter | Type | Description |
|---|---|---|
| model_path (positional) | str | Path to the ONNX model file to test |
| layer_name (positional, optional) | str or "DEBUG" | Specific layer name(s) to test, or DEBUG for automatic error localization |
| --thredhold (internal) | float | Maximum relative error rate, default 0.01 (passed to MNNConvert's --testdir mechanism) |
Inputs
- ONNX model file (.onnx) -- The model to convert and verify
- MNNConvert binary -- Must be available in the current directory (as ./MNNConvert or MNNConvert.exe) or as mnnconvert via pymnn
Outputs
- Pass/fail result -- Printed to stdout. TEST_SUCCESS indicates the conversion is numerically correct.
- Error metrics -- On failure, prints TESTERROR messages with absMaxV (reference max value) and DiffMax (maximum absolute difference)
- Debug output -- In DEBUG mode, prints the first operator producing incorrect results
- Error files -- On failure, MNN saves results to a .error/ directory and an .Error.mnn file for inspection
Internal Workflow
Test Data Generation (__run_onnx)
# tools/script/testMNNFromOnnx.py:L129-183
def __run_onnx(self):
# 1. Create ONNX Runtime session
ort_session = ort.InferenceSession(self.modelName)
# 2. Generate random inputs based on model input specs
for inputVar in ort_session.get_inputs():
# Handle dynamic dimensions (replace strings with 1)
# Generate random data with appropriate dtype:
# - int64: uniform(0, 12)
# - int32: uniform(0, 12)
# - bool: uniform(0, 1)
# - float32: uniform(0.1, 1.2)
# - float16: cast from float32
# 3. Write input.json metadata file
# 4. Write input tensor data as flattened text files
# 5. Run ONNX Runtime inference
# 6. Write reference output tensor data as flattened text files
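The per-dtype input generation in step 2 can be sketched as below. make_random_input is a hypothetical helper (not the script's API); the dtype strings follow ONNX Runtime's get_inputs() type format, and the value ranges are the ones listed above:

```python
import numpy as np

def make_random_input(shape, ort_dtype):
    # Replace dynamic dimensions (strings/None) with 1, as described above.
    shape = [d if isinstance(d, int) and d > 0 else 1 for d in shape]
    if ort_dtype == "tensor(int64)":
        return np.random.uniform(0, 12, shape).astype(np.int64)
    if ort_dtype == "tensor(int32)":
        return np.random.uniform(0, 12, shape).astype(np.int32)
    if ort_dtype == "tensor(bool)":
        return np.random.uniform(0, 1, shape).round().astype(np.bool_)
    if ort_dtype == "tensor(float16)":
        # float16 inputs are cast from float32
        return np.random.uniform(0.1, 1.2, shape).astype(np.float32).astype(np.float16)
    # default: float32
    return np.random.uniform(0.1, 1.2, shape).astype(np.float32)
```

The flattened tensors would then be written as whitespace-separated text files alongside the input.json metadata, which is the format MNNConvert's --testdir mechanism consumes.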
MNN Conversion and Test (__run_mnn)
# tools/script/testMNNFromOnnx.py:L120-128
def __run_mnn(self):
convert = './MNNConvert -f ONNX --bizCode MNN --modelFile onnx/test.onnx ' \
'--MNNModel convert_cache.mnn --keepInputFormat=1 --testdir onnx'
result = os.popen(convert).read()
return result
This invokes MNNConvert with the --testdir option, which triggers MNNConvert's built-in Cli::testconvert() method to compare outputs using the test data generated by __run_onnx().
Binary Search Debug (IDominate class)
The IDominate class (at tools/script/testMNNFromOnnx.py:L22-105) implements the Lengauer-Tarjan algorithm for computing the immediate dominator tree of the ONNX computation graph. This is used by the Debug() method to efficiently locate conversion errors:
class IDominate:
def __init__(self, n) # Initialize for graph with n nodes
def addEdge(self, _from, _to) # Add directed edge
def getIDoms(self) -> list # Compute and return immediate dominators
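IDominate uses Lengauer-Tarjan; as an illustration of what a dominator computation produces, here is a simpler iterative sketch in the style of Cooper-Harvey-Kennedy. immediate_dominators is a hypothetical helper for this document, not the class's API:

```python
def immediate_dominators(n, edges, root=0):
    # Build successor/predecessor lists for a directed graph of n nodes.
    succs = [[] for _ in range(n)]
    preds = [[] for _ in range(n)]
    for a, b in edges:
        succs[a].append(b)
        preds[b].append(a)

    # Reverse postorder of a DFS from the root.
    order, seen = [], [False] * n
    def dfs(u):
        seen[u] = True
        for v in succs[u]:
            if not seen[v]:
                dfs(v)
        order.append(u)
    dfs(root)
    rpo = order[::-1]
    index = {u: i for i, u in enumerate(rpo)}

    idom = {root: root}

    def intersect(a, b):
        # Walk both nodes up the dominator tree until they meet.
        while a != b:
            while index[a] > index[b]:
                a = idom[a]
            while index[b] > index[a]:
                b = idom[b]
        return a

    # Iterate to a fixed point.
    changed = True
    while changed:
        changed = False
        for u in rpo:
            if u == root:
                continue
            new = None
            for p in preds[u]:
                if p in idom:
                    new = p if new is None else intersect(new, p)
            if new is not None and idom.get(u) != new:
                idom[u] = new
                changed = True
    return idom
```

For a diamond-shaped graph (two parallel branches rejoining), the join node's immediate dominator is the fork node, which is exactly the property Debug() exploits: every node on the dominator path must be computed correctly for the failing output to be correct.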
The binary search process (in __binary_search at L246-266):
- Compute the dominator path from the first node to the failing output
- Binary search: test the midpoint node's output
- If midpoint is correct, search the right half; otherwise search the left half
- When narrowed to adjacent nodes, attempt sub-graph search via __sub_graph
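The search loop over the dominator path can be sketched as follows; first_failing and node_ok are hypothetical names, with node_ok standing in for the per-node TestName check:

```python
def first_failing(path, node_ok):
    # path: dominator path from the first node to the failing output.
    # node_ok(node): returns True if that node's output matches the reference.
    # Assumes path[0] is correct and path[-1] fails.
    lo, hi = 0, len(path) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if node_ok(path[mid]):
            lo = mid  # midpoint correct: the error is downstream
        else:
            hi = mid  # midpoint wrong: the error is at or before mid
    return path[hi]
```

Because every node on the dominator path must be correct for the output to be correct, O(log n) per-node tests suffice instead of testing every operator.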
Usage Examples
Basic Conversion Test
cd /path/to/MNN/build
python ../tools/script/testMNNFromOnnx.py /path/to/model.onnx
Test with Automatic Debug
python ../tools/script/testMNNFromOnnx.py /path/to/model.onnx DEBUG
Test Specific Layer Output
python ../tools/script/testMNNFromOnnx.py /path/to/model.onnx layer_name_1 layer_name_2
Error Comparison Logic
The comparison logic is implemented in Cli::testconvert() (in cli.cpp:L864-1085) and compareOutput() (in cli.cpp:L792-862). For each output tensor:
- Load reference output from text file
- Convert MNN output from NC4HW4 to the model's default format if needed
- Cast non-float outputs to float
- Compute absMaxV = max(|reference|), clamped to at least 0.0001
- Compute DiffMax = max(|reference - mnn_output|)
- Check for infinity/NaN in MNN output (immediate failure)
- Pass if DiffMax < absMaxV * threshold
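A minimal Python sketch of this comparison, assuming flat float arrays (the actual implementation is the C++ code in cli.cpp; outputs_match is a hypothetical name):

```python
import numpy as np

def outputs_match(reference, mnn_output, threshold=0.01):
    ref = np.asarray(reference, dtype=np.float32)
    out = np.asarray(mnn_output, dtype=np.float32)
    # Infinity/NaN in the MNN output is an immediate failure.
    if not np.isfinite(out).all():
        return False
    # Reference magnitude, clamped to at least 0.0001.
    abs_max_v = max(float(np.abs(ref).max()), 0.0001)
    diff_max = float(np.abs(ref - out).max())
    # Relative-error check against the default 1% threshold.
    return diff_max < abs_max_v * threshold
```

Note the tolerance scales with the reference's maximum magnitude, so small absolute errors on large-valued tensors pass while the same errors on near-zero tensors fail.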