# Implementation: Kornia ONNXSequential Call

| Knowledge Sources | |
|---|---|
| Domains | ONNX, Deployment, Inference |
| Last Updated | 2026-02-09 15:00 GMT |
## Overview

A concrete entry point for running inference over a sequential ONNX pipeline, provided by Kornia's `ONNXRuntimeMixin`.
## Description

`ONNXSequential` inherits `__call__` from `ONNXRuntimeMixin`. The method accepts numpy arrays as positional arguments, maps them to the model's input names, executes the ONNX Runtime session, and returns a list of numpy output arrays. It handles input validation and output collection.
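The call path described above can be sketched as follows. This is an illustrative reconstruction, not the exact Kornia source: `FakeSession` is a stub standing in for a real `onnxruntime.InferenceSession`, and `CallSketch` only mirrors the mapping/validation/run steps the description lists.

```python
import numpy as np


class FakeSession:
    """Stub mimicking the small slice of onnxruntime.InferenceSession used here."""

    class _IOMeta:
        def __init__(self, name: str) -> None:
            self.name = name

    def get_inputs(self):
        return [self._IOMeta("input_image")]

    def run(self, output_names, feed):
        # Pretend the model doubles its single input.
        return [feed["input_image"] * 2]


class CallSketch:
    """Illustrative reconstruction of the __call__ logic: validate, map, run."""

    def __init__(self, session) -> None:
        self._session = session

    def __call__(self, *inputs: np.ndarray) -> list[np.ndarray]:
        # Map positional arrays onto the model's declared input names.
        names = [meta.name for meta in self._session.get_inputs()]
        if len(inputs) != len(names):
            raise ValueError(f"expected {len(names)} inputs, got {len(inputs)}")
        feed = dict(zip(names, inputs))
        # Passing None as output_names asks the session for all model outputs.
        return self._session.run(None, feed)


model = CallSketch(FakeSession())
x = np.ones((1, 3, 4, 4), dtype=np.float32)
out = model(x)
print(out[0].shape)  # (1, 3, 4, 4)
```

The real mixin delegates to the session it built at construction time; the key point is that inputs are matched to input names positionally, so argument order must follow the model's declared input order.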
## Usage

Call the `ONNXSequential` instance directly with numpy input arrays.
## Code Reference

| Repository | https://github.com/kornia/kornia |
|---|---|
| File | `kornia/core/mixin/onnx.py` |
| Lines | L274–288 |
| Signature | `def __call__(self, *inputs: np.ndarray) -> list[np.ndarray]` |
| Import | `from kornia.onnx import ONNXSequential` (call via instance) |
## I/O Contract

### Inputs

| Parameter | Type | Required | Description |
|---|---|---|---|
| `*inputs` | `np.ndarray` | Yes | Numpy arrays matching the model's input shapes and dtypes |

### Outputs

`list[np.ndarray]` — model outputs as numpy arrays.
## Usage Examples

Running inference on preprocessed numpy data:

```python
import numpy as np
from kornia.onnx import ONNXSequential

# Construct the pipeline from two ONNX operators, wiring the resize
# output into the classifier input.
pipeline = ONNXSequential(
    "hf://operators/resize",
    "hf://operators/classifier",
    io_maps=[("resized", "input_image")],
)

# Prepare input as a numpy array (batch, channels, height, width).
input_data = np.random.randn(1, 3, 224, 224).astype(np.float32)

# Run inference.
outputs = pipeline(input_data)
print(outputs[0].shape)  # e.g., (1, 1000)
```
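Because `__call__` returns plain numpy arrays, postprocessing stays in numpy. A minimal sketch of turning classifier logits of shape `(1, 1000)` (as in the example output above) into top-5 predictions; the random logits here are placeholder data, and the assumption that the model emits raw logits rather than probabilities is hypothetical:

```python
import numpy as np

# Placeholder logits matching the (1, 1000) output shape from the example.
logits = np.random.randn(1, 1000).astype(np.float32)

# Numerically stable softmax over the class axis.
shifted = logits - logits.max(axis=1, keepdims=True)
exp = np.exp(shifted)
probs = exp / exp.sum(axis=1, keepdims=True)

# Indices of the five highest-probability classes.
top5 = np.argsort(probs[0])[::-1][:5]
print(top5, probs[0, top5])
```

If the exported classifier already ends in a `Softmax` node, the softmax step here is redundant and only the `argsort` is needed.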