Implementation:Roboflow Rf detr ORT InferenceSession

From Leeroopedia


Knowledge Sources
Domains Deployment, Inference
Last Updated 2026-02-08 15:00 GMT

Overview

External tool documentation for validating and running inference on exported RF-DETR ONNX models using ONNX Runtime.

Description

onnxruntime.InferenceSession loads an ONNX model and provides a run() method for inference. It selects from the requested execution providers in priority order (e.g. CUDA GPU with CPU fallback) and applies runtime graph optimizations. For RF-DETR models, the session expects a single input tensor named input and produces dets (bounding boxes) and labels (class logits) as outputs.

Usage

Use after ONNX export to validate model correctness or to run inference in production without PyTorch.

Code Reference

Source Location

  • External: onnxruntime Python package

Signature

import onnxruntime as ort

session = ort.InferenceSession(
    onnx_path: str,
    providers: List[str] = ["CUDAExecutionProvider", "CPUExecutionProvider"],
)

results = session.run(
    output_names: Optional[List[str]],
    input_feed: Dict[str, np.ndarray],
)

Import

import onnxruntime as ort

I/O Contract

Inputs

Name      | Type        | Required | Description
onnx_path | str         | Yes      | Path to the ONNX model file
providers | List[str]   | No       | Execution providers (default: CUDA with CPU fallback)
input     | np.ndarray  | Yes      | Input image tensor of shape (1, 3, H, W), normalized
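Producing the normalized (1, 3, H, W) tensor from a raw image can be sketched as below. The nearest-neighbour resize is a dependency-free stand-in, and the ImageNet mean/std values are an assumption; check the normalization your RF-DETR export actually used.

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 560) -> np.ndarray:
    """Convert an (H, W, 3) uint8 RGB image into a (1, 3, size, size)
    float32 tensor.

    Assumptions: nearest-neighbour resize is used for brevity, and the
    ImageNet mean/std below may differ from your export's normalization.
    """
    # Nearest-neighbour resize via index sampling (avoids extra deps).
    h, w, _ = image.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols]

    # Scale to [0, 1], then normalize per channel (assumed ImageNet stats).
    x = resized.astype(np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = (x - mean) / std

    # HWC -> CHW, then add the batch dimension.
    return x.transpose(2, 0, 1)[np.newaxis]
```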

Outputs

Name   | Type       | Description
dets   | np.ndarray | Bounding box predictions
labels | np.ndarray | Class logit predictions
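Because labels holds raw logits, a decoding step is needed to get usable detections. The sketch below assumes DETR-family scoring (a sigmoid over the class logits, best class per query, then a confidence threshold); verify this against your RF-DETR export before relying on it.

```python
import numpy as np

def postprocess(dets: np.ndarray, labels: np.ndarray, threshold: float = 0.5):
    """Turn raw (1, N, 4) boxes and (1, N, C) class logits into
    (boxes, scores, class_ids) kept above a confidence threshold.

    Assumption: DETR-style detectors score classes with a sigmoid
    over the logits; confirm for your exported model.
    """
    scores_all = 1.0 / (1.0 + np.exp(-labels[0]))  # sigmoid -> (N, C)
    class_ids = scores_all.argmax(axis=1)          # best class per query
    scores = scores_all.max(axis=1)                # its confidence
    keep = scores > threshold                      # drop low-confidence queries
    return dets[0][keep], scores[keep], class_ids[keep]
```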

Usage Examples

Validate Exported Model

import numpy as np
import onnxruntime as ort

# Load ONNX model
session = ort.InferenceSession(
    "output/inference_model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# Create sample input (matches model resolution)
input_tensor = np.random.randn(1, 3, 560, 560).astype(np.float32)

# Run inference
dets, labels = session.run(None, {"input": input_tensor})

print(f"Detections shape: {dets.shape}")  # (1, 300, 4)
print(f"Labels shape: {labels.shape}")    # (1, 300, num_classes)
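Shape checks confirm the graph runs, but not that it is numerically faithful. A stronger validation compares ONNX outputs against the original PyTorch model's outputs on the same input; a minimal tolerance check (the helper name and tolerance values are ours):

```python
import numpy as np

def outputs_match(onnx_out: np.ndarray, ref_out: np.ndarray,
                  rtol: float = 1e-3, atol: float = 1e-4) -> bool:
    """Elementwise comparison with tolerances loose enough to absorb
    ONNX Runtime graph optimizations and float32 reordering."""
    if onnx_out.shape != ref_out.shape:
        return False
    return bool(np.allclose(onnx_out, ref_out, rtol=rtol, atol=atol))
```

For example, `outputs_match(dets, torch_dets.detach().cpu().numpy())`, where `torch_dets` comes from running the original PyTorch model on the same input tensor.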

Related Pages

Implements Principle

Requires Environment
