Implementation:Microsoft Onnxruntime SessionOptions Init
Metadata
| Field | Value |
|---|---|
| Implementation Name | SessionOptions_Init |
| Repository | Microsoft_Onnxruntime |
| Source Repository | https://github.com/microsoft/onnxruntime |
| Type | API Doc |
| Language | Python |
| Domain | ML_Inference, Model_Optimization |
| Last Updated | 2026-02-10 |
| Workflow | Python_Inference_Pipeline |
| Pair | 1 of 6 |
Overview
API documentation for the onnxruntime.SessionOptions() constructor, which creates a configuration object for tuning ONNX Runtime inference session behavior.
API Signature
```python
onnxruntime.SessionOptions()
```
Import
```python
from onnxruntime import SessionOptions, GraphOptimizationLevel
```
Code Reference
| Reference | Location |
|---|---|
| Import definition | onnxruntime/__init__.py:L51 |
| Usage example | docs/python/examples/plot_profiling.py:L49-51 |
I/O Contract
Inputs
| Parameter | Type | Required | Description |
|---|---|---|---|
| (none) | -- | -- | The constructor takes no arguments. |
Outputs
| Output | Type | Description |
|---|---|---|
| return value | SessionOptions | A configured SessionOptions object with default settings. |
Configurable Properties
| Property | Type | Default | Description |
|---|---|---|---|
| enable_profiling | bool | False | Enables performance profiling for the session. |
| graph_optimization_level | GraphOptimizationLevel | ORT_ENABLE_ALL | Controls which graph optimization passes are applied. |
| intra_op_num_threads | int | 0 (auto) | Number of threads for intra-operator parallelism. |
| inter_op_num_threads | int | 0 (auto) | Number of threads for inter-operator parallelism. |
| execution_mode | ExecutionMode | ORT_SEQUENTIAL | Sequential or parallel operator execution. |
| enable_mem_pattern | bool | True | Enables memory pattern optimization. |
Usage Example
```python
import onnxruntime as rt

# Create a SessionOptions object
options = rt.SessionOptions()

# Enable profiling
options.enable_profiling = True

# Set graph optimization to maximum
options.graph_optimization_level = rt.GraphOptimizationLevel.ORT_ENABLE_ALL

# Pass options to InferenceSession
sess = rt.InferenceSession("model.onnx", options, providers=rt.get_available_providers())
```
The above pattern is derived from the profiling example at docs/python/examples/plot_profiling.py:L49-51:
```python
options = rt.SessionOptions()
options.enable_profiling = True
sess_profile = rt.InferenceSession(onnx_model_str, options, providers=rt.get_available_providers())
```
Related Pages
- Principle:Microsoft_Onnxruntime_Session_Options_Configuration
- Implementation:Microsoft_Onnxruntime_InferenceSession_Init
- Environment:Microsoft_Onnxruntime_Python_Inference_Environment
- Heuristic:Microsoft_Onnxruntime_Graph_Optimization_Level_Selection
- Heuristic:Microsoft_Onnxruntime_Threading_Configuration_Tips