# Environment: .NET Machine Learning ONNX Runtime
| Knowledge Sources | |
|---|---|
| Domains | Infrastructure, Model_Interop |
| Last Updated | 2026-02-09 11:00 GMT |
## Overview
ONNX Runtime 1.23.2 managed environment for scoring pre-trained ONNX models within ML.NET pipelines, with optional GPU acceleration.
## Description
This environment provides the ONNX Runtime integration for ML.NET. It uses the managed ONNX Runtime package (`Microsoft.ML.OnnxRuntime.Managed`) version 1.23.2 to score ONNX models. GPU acceleration is optional with fallback to CPU. The ONNX model inputs and outputs must be Tensor types; Sequence and Map types are not yet supported.
## Usage
Use this environment when scoring pre-trained ONNX models or converting ML.NET models to ONNX format using the `ApplyOnnxModel` transformer or `OnnxConverter`.
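A minimal scoring sketch, assuming a local `model.onnx` whose graph defines a float tensor input named `input` and an output named `output` (the class name, vector size, and column names below are placeholders; substitute the tensor names your model actually declares):

```csharp
using System.Collections.Generic;
using Microsoft.ML;
using Microsoft.ML.Data;

public class ModelInput
{
    // Placeholder schema: the vector size and column name must match
    // the ONNX model's declared input tensor.
    [VectorType(4)]
    [ColumnName("input")]
    public float[] Features { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var mlContext = new MLContext();

        // An empty DataView is enough to fit a pure ONNX scoring pipeline,
        // since ApplyOnnxModel learns nothing from the training data.
        var emptyData = mlContext.Data.LoadFromEnumerable(new List<ModelInput>());

        var pipeline = mlContext.Transforms.ApplyOnnxModel(
            outputColumnNames: new[] { "output" },
            inputColumnNames: new[] { "input" },
            modelFile: "model.onnx");

        var transformer = pipeline.Fit(emptyData);
        // transformer.Transform(...) can now score new IDataView inputs.
    }
}
```

`ApplyOnnxModel` lives in the `Microsoft.ML.OnnxTransformer` package, which pulls in the managed ONNX Runtime binaries listed under Dependencies.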
## System Requirements
| Category | Requirement | Notes |
|---|---|---|
| OS | Windows, Linux (glibc >= 2.23), macOS | Cross-platform |
| Hardware | x64 CPU (primary), optional GPU | GPU requires separate ONNX Runtime GPU package |
| Disk | 200MB+ | ONNX Runtime binaries and model files |
## Dependencies
### NuGet Packages
- `Microsoft.ML.OnnxRuntime.Managed` = 1.23.2
- `Google.Protobuf` >= 3.30.2
### System Requirements
- Linux: `glibc` >= 2.23
- Protobuf recursion limit: 100 by default (configurable for deeply nested models)
## Credentials
No credentials required for ONNX Runtime usage.
## Quick Install

```shell
# Via NuGet (in .csproj)
dotnet add package Microsoft.ML.OnnxTransformer

# For GPU support (optional)
dotnet add package Microsoft.ML.OnnxRuntime.Gpu
```
## Code Evidence

ONNX Runtime version from `eng/Versions.props:47`:

```xml
<MicrosoftMLOnnxRuntimeVersion>1.23.2</MicrosoftMLOnnxRuntimeVersion>
```

Managed package reference from `src/Microsoft.ML.OnnxTransformer/Microsoft.ML.OnnxTransformer.csproj:16`:

```xml
<PackageReference Include="Microsoft.ML.OnnxRuntime.Managed" />
```

Tensor-only type constraint from `src/Microsoft.ML.OnnxTransformer/OnnxTransform.cs:948`:

```csharp
/// The inputs and outputs of the ONNX models must be Tensor type.
/// Sequence and Maps are not yet supported.
```

Protobuf recursion limit from `src/Microsoft.ML.OnnxTransformer/OnnxOptions.cs:45`:

```csharp
/// Protobuf CodedInputStream recursion limit.
```

GPU fallback support from `src/Microsoft.ML.OnnxTransformer/OnnxCatalog.cs`:

```csharp
// GPU device ID parameter support for ONNX
// Optional fallback to CPU on GPU error
```
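The GPU device ID and CPU fallback parameters surfaced in `OnnxCatalog.cs` can be sketched as follows (assuming `Microsoft.ML.OnnxRuntime.Gpu` is installed and `model.onnx` with `input`/`output` tensor names are placeholders):

```csharp
using Microsoft.ML;

var mlContext = new MLContext();

// Score on GPU 0 if the CUDA-enabled runtime initializes successfully;
// otherwise fall back to CPU execution instead of throwing.
var pipeline = mlContext.Transforms.ApplyOnnxModel(
    outputColumnNames: new[] { "output" },
    inputColumnNames: new[] { "input" },
    modelFile: "model.onnx",
    gpuDeviceId: 0,
    fallbackToCpu: true);
```

Omitting `gpuDeviceId` (or passing `null`) keeps scoring on the CPU, which is the default path for this environment.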
## Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `DllNotFoundException: onnxruntime` | ONNX Runtime native binary not found | Ensure `Microsoft.ML.OnnxRuntime.Managed` NuGet package is installed |
| `GLIBC_2.23 not found` | Linux libc version below 2.23 | Upgrade to Ubuntu 16.04+ or equivalent |
| `OnnxRuntimeException: Invalid model` | Tensor type mismatch or unsupported type | Ensure model uses only Tensor inputs/outputs (no Sequence or Map) |
| `ProtobufException: recursion limit exceeded` | Deeply nested ONNX model graph | Increase Protobuf recursion limit in OnnxOptions |
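For the recursion-limit error, a hedged sketch of raising the limit through the options-based `ApplyOnnxModel` overload (`OnnxOptions` is in `Microsoft.ML.Transforms.Onnx`; the model path and column names are placeholders, and the exact property set may vary by ML.NET version):

```csharp
using Microsoft.ML;
using Microsoft.ML.Transforms.Onnx;

var mlContext = new MLContext();

var options = new OnnxOptions
{
    InputColumns = new[] { "input" },
    OutputColumns = new[] { "output" },
    ModelFile = "model.onnx",
    // Raise the Protobuf CodedInputStream recursion limit above the
    // default of 100 for deeply nested model graphs.
    RecursionLimit = 200
};

var pipeline = mlContext.Transforms.ApplyOnnxModel(options);
```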
## Compatibility Notes
- ARM64: only inference is supported on ARM64; training features are x64-only.
- Blazor WASM: ONNX Runtime is not supported in WebAssembly environments.
- GPU: Requires separate `Microsoft.ML.OnnxRuntime.Gpu` package; falls back to CPU on error.
- ONNX Functions: Sub-graph functions have restricted scope; do not use in main graph context.