Environment: Roboflow RF-DETR ONNX Export Environment
| Knowledge Sources | |
|---|---|
| Domains | Infrastructure, Deployment, Computer Vision |
| Last Updated | 2026-02-08 15:00 GMT |
Overview
Optional dependency environment for ONNX model export, simplification, and TensorRT engine building. Extends the core Python GPU environment with ONNX toolchain packages.
Description
This environment adds the ONNX export toolchain on top of the core RF-DETR environment. It includes the ONNX library for model serialization, onnxsim for graph simplification, onnx-graphsurgeon for graph manipulation, and ONNX Runtime for validation. For TensorRT deployment, additional packages (pycuda, tensorrt, polygraphy) are needed from the deploy requirements.
Usage
Use this environment when you need to export an RF-DETR model to ONNX format for deployment, simplify the ONNX graph for optimization, or validate the exported model with ONNX Runtime. It is installed with `pip install "rfdetr[onnxexport]"` and is required by the ONNX Export workflow.
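The export entry point can be sketched as below. The `RFDETRBase` class and its `export()` method follow the rfdetr README, but treat the exact argument names (e.g. `output_dir`) as assumptions rather than a guaranteed API:

```python
# Hedged sketch: export a pretrained RF-DETR model to ONNX.
# Requires: pip install "rfdetr[onnxexport]"
try:
    from rfdetr import RFDETRBase
except ImportError:  # ONNX export dependencies missing
    RFDETRBase = None

def export_to_onnx(output_dir: str = "onnx_out"):
    """Load a pretrained RF-DETR model and export it to ONNX (batch size 1, on CPU)."""
    if RFDETRBase is None:
        raise ImportError('Run: pip install "rfdetr[onnxexport]"')
    model = RFDETRBase()
    model.export(output_dir=output_dir)
```

The guard mirrors the library's own import guard shown under Code Evidence, so the sketch degrades with an actionable message instead of a bare `ModuleNotFoundError`.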
System Requirements
| Category | Requirement | Notes |
|---|---|---|
| OS | Linux (recommended) | ONNX export uses CPU; TensorRT requires Linux + NVIDIA GPU |
| Hardware | NVIDIA GPU (for TensorRT) | ONNX export itself runs on CPU; validation can use GPU |
| Disk | 1GB+ free space | ONNX models can be 100MB-500MB depending on model size |
Dependencies
Python Packages (via `rfdetr[onnxexport]`)
- `onnx` (any version)
- `onnxsim` (any version)
- `onnx_graphsurgeon` (any version)
- `onnxruntime` (any version)
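Since all four packages above are unpinned, a quick stdlib check of what is actually installed can save a confusing mid-export failure. This is a generic sketch; the distribution names are taken from the list above:

```python
from importlib.metadata import version, PackageNotFoundError

ONNX_EXPORT_DEPS = ["onnx", "onnxsim", "onnx_graphsurgeon", "onnxruntime"]

def check_deps(names=ONNX_EXPORT_DEPS):
    """Return {package: installed version, or None if missing} for each dependency."""
    found = {}
    for name in names:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None
    return found
```

Calling `check_deps()` before an export run makes it obvious whether a failure is an environment problem or a model problem.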
Additional Packages (via `rfdetr/deploy/requirements.txt` for TensorRT)
- `pycuda` (any version)
- `onnxruntime-gpu` (any version)
- `tensorrt` >= 8.6.1
- `polygraphy` (any version)
Credentials
No credentials are required for ONNX export.
Quick Install
```shell
# Install ONNX export dependencies
pip install "rfdetr[onnxexport]"

# For full TensorRT deployment (advanced)
pip install pycuda onnxruntime-gpu "tensorrt>=8.6.1" polygraphy
```
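After the ONNX export, the TensorRT engine build is typically done with the standard `trtexec` CLI on the Linux GPU host. The model filename here is an assumption; substitute your exported ONNX path:

```shell
# Build a TensorRT engine from an exported ONNX model (standard trtexec flags).
# Guarded so the snippet is a no-op on machines without TensorRT installed.
if command -v trtexec >/dev/null 2>&1; then
  trtexec --onnx=inference_model.onnx --saveEngine=inference_model.engine --fp16
else
  echo "trtexec not found; install TensorRT >= 8.6.1 first"
fi
```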
Code Evidence
Import guard for ONNX dependencies from `rfdetr/main.py:557-561`:
```python
try:
    from rfdetr.deploy.export import export_onnx, make_infer_image, onnx_simplify
except ImportError:
    print("It seems some dependencies for ONNX export are missing. "
          "Please run `pip install rfdetr[onnxexport]` and try again.")
    raise
```
ONNX export uses opset version 17 by default from `rfdetr/deploy/export.py:64`:
```python
def export_onnx(output_dir, model, input_names, input_tensors, output_names,
                dynamic_axes, backbone_only=False, verbose=True, opset_version=17):
```
Shape validation for export from `rfdetr/main.py:573-574`:
```python
if shape[0] % 14 != 0 or shape[1] % 14 != 0:
    raise ValueError("Shape must be divisible by 14")
```
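The divisibility-by-14 rule follows from the backbone's patch size. A small helper (not part of the library; the patch size of 14 is taken from the check above) can round a requested resolution to the nearest valid export shape instead of failing:

```python
def round_to_patch(size: int, patch: int = 14) -> int:
    """Round a side length to the nearest multiple of `patch` (at least one patch)."""
    return max(patch, round(size / patch) * patch)

def valid_export_shape(h: int, w: int, patch: int = 14) -> tuple:
    """Mirror the library's check: both sides must be divisible by the patch size."""
    if h % patch or w % patch:
        raise ValueError("Shape must be divisible by 14")
    return (h, w)
```

For example, a requested 640x640 export would round to 644x644, the nearest valid shape.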
Batch size forced to 1 for ONNX export from `rfdetr/main.py:705-708`:
```python
if args.batch_size != 1:
    config['batch_size'] = 1
    print(f"Only batch_size 1 is supported for onnx export, "
          "but got batchsize = {args.batch_size}. batch_size is forcibly set to 1.")
```
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `ImportError: No module named 'onnx'` | ONNX export dependencies not installed | Run `pip install "rfdetr[onnxexport]"` |
| `RuntimeError: Failed to simplify ONNX model` | ONNX simplification check failed | Try export without `simplify=True` or update onnxsim |
| `ValueError: Shape must be divisible by 14` | Custom export shape not compatible with patch size | Ensure both height and width are divisible by 14 |
| Batch size forced to 1 | ONNX export only supports batch_size=1 | This is expected; dynamic batching not yet supported |
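The simplification-failure row above suggests falling back to the unsimplified model. A hedged sketch of that fallback, using onnxsim's documented `simplify(model) -> (model, check_ok)` API and guarded for the optional dependencies:

```python
def try_simplify(model_path: str) -> str:
    """Simplify an ONNX model, falling back to the original file on any failure."""
    try:
        import onnx
        from onnxsim import simplify
    except ImportError:
        return model_path  # simplification deps missing; use the raw export
    model = onnx.load(model_path)
    simplified, ok = simplify(model)
    if not ok:
        return model_path  # onnxsim's correctness check failed; keep the original
    out = model_path.replace(".onnx", ".sim.onnx")
    onnx.save(simplified, out)
    return out
```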
Compatibility Notes
- ONNX Export: Runs on CPU (model is moved to CPU during export). CUDA is not required for export itself.
- TensorRT: Requires Linux with NVIDIA GPU and CUDA. TensorRT >= 8.6.1 needed.
- ONNX Opset: Default opset version is 17. Older opset versions may not support all operations.
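The CPU-export point above can be sanity-checked by running the exported model with ONNX Runtime on the CPU provider. The file name and 560x560 input size are assumptions; adjust both to your export:

```python
# Hedged sketch: load an exported RF-DETR ONNX model and run a dummy CPU inference.
try:
    import numpy as np
    import onnxruntime as ort
except ImportError:  # optional deps missing: pip install "rfdetr[onnxexport]"
    ort = None

def validate(onnx_path: str = "inference_model.onnx", size=(560, 560)):
    """Run one zero-tensor inference on CPU; returns outputs, or None if ort is missing."""
    if ort is None:
        return None
    sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name
    dummy = np.zeros((1, 3, *size), dtype=np.float32)  # batch size 1, as export requires
    return sess.run(None, {input_name: dummy})
```

A successful run confirms the graph loads and executes without CUDA, matching the note that export and validation do not require a GPU.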