Implementation: Kornia ONNXSequential
| Knowledge Sources | |
|---|---|
| Domains | ONNX, Deployment, Pipeline_Design |
| Last Updated | 2026-02-09 15:00 GMT |
Overview
A concrete tool provided by Kornia for chaining ONNX models into a single inference pipeline.
Description
ONNXSequential accepts ONNX models (as ModelProto objects, file paths, or HuggingFace Hub identifiers), combines their graphs with io_maps routing, and creates a single ONNX Runtime InferenceSession. It supports:
- Execution provider selection — CUDA, CPU, and others.
- Session option configuration — thread count, optimization level, etc.
- Automatic IR/opset version conversion — harmonize models with different ONNX versions.
The combined graph can be exported as a single ONNX file.
Usage
Initialize with models, io_maps, and providers. Call the instance with numpy arrays for inference.
Code Reference
| Repository | https://github.com/kornia/kornia |
|---|---|
| File | kornia/onnx/sequential.py |
| Lines | L29–115 |
| Signature | ONNXSequential(*args: str, providers: Optional[list[str]] = None, session_options: Optional[ort.SessionOptions] = None, io_maps: Optional[list[tuple[str, str]]] = None, cache_dir: Optional[str] = None, auto_ir_version_conversion: bool = False, target_ir_version: Optional[int] = None, target_opset_version: Optional[int] = None) |
| Import | from kornia.onnx import ONNXSequential |
I/O Contract
Inputs
| Parameter | Type | Required | Description |
|---|---|---|---|
| *args | str | Yes | ONNX models or identifiers (paths, URLs, HF Hub) |
| providers | list[str] | No | Execution providers (e.g., ["CUDAExecutionProvider"]) |
| io_maps | list[tuple[str, str]] | No | I/O routing between consecutive models |
| auto_ir_version_conversion | bool | No | Auto-convert IR versions across models |
| session_options | ort.SessionOptions | No | ONNX Runtime session configuration |
| cache_dir | str | No | Directory for caching downloaded models |
| target_ir_version | int | No | Target IR version for conversion |
| target_opset_version | int | No | Target opset version for conversion |
Outputs
A configured ONNXSequential instance with an active inference session.
Usage Examples
Creating a pipeline from HuggingFace Hub models

```python
from kornia.onnx import ONNXSequential

pipeline = ONNXSequential(
    "hf://operators/resize",
    "hf://operators/normalize",
    "hf://operators/classifier",
    io_maps=[("resized", "input"), ("normalized", "images")],
)
```
Pipeline with CUDA provider

```python
from kornia.onnx import ONNXSequential

pipeline = ONNXSequential(
    "hf://operators/resize",
    "hf://operators/classifier",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    io_maps=[("resized", "input_image")],
)
```