Implementation: Roboflow RF-DETR (Roboflow Inference SDK)
| Knowledge Sources | |
|---|---|
| Domains | Deployment, Inference |
| Last Updated | 2026-02-08 15:00 GMT |
Overview
External tool documentation for running inference on deployed RF-DETR models via the Roboflow Inference SDK.
Description
The inference Python package provides get_model() to load a deployed model by its ID and model.infer() to run detection on images. It handles image encoding, API communication, and result parsing transparently.
Usage
Use after deploying a model to Roboflow to run inference from any Python environment.
Code Reference
Source Location
- External: inference Python package (Roboflow Inference SDK)
Signature
from inference import get_model
model = get_model(model_id="project_id/version")
results = model.infer(image)  # list of responses, one per input image
Import
from inference import get_model
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| model_id | str | Yes | Model identifier ("project_id/version") |
| image | Union[str, np.ndarray, PIL.Image] | Yes | Image path, URL, array, or PIL Image |
Outputs
| Name | Type | Description |
|---|---|---|
| result | List | List of per-image detection responses, each with bounding boxes, class labels, and confidence scores |
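Roboflow-style predictions report a box center (x, y) plus width and height. A minimal sketch of converting that to corner coordinates, assuming the fields shown in the tables above (to_corner_box is a hypothetical helper, not part of the SDK):

```python
def to_corner_box(prediction: dict) -> tuple:
    """Convert a center-based box (x, y, width, height) to (x1, y1, x2, y2)."""
    x, y = prediction["x"], prediction["y"]
    w, h = prediction["width"], prediction["height"]
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2)

# Sample prediction mirroring the fields in the I/O contract above
sample = {"x": 100.0, "y": 50.0, "width": 40.0, "height": 20.0}
print(to_corner_box(sample))  # (80.0, 40.0, 120.0, 60.0)
```

Corner form is what most drawing and cropping utilities expect, which is why this conversion usually happens first in downstream code.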
Usage Examples
Run Serverless Inference
from inference import get_model
# Load deployed model (reads the ROBOFLOW_API_KEY environment variable by default)
model = get_model(model_id="my-detection-project/1")
# Run inference; infer() returns one response per input image
result = model.infer("image.jpg")[0]
# Access detections
for prediction in result.predictions:
    print(f"Class: {prediction.class_name}")
    print(f"Confidence: {prediction.confidence}")
    print(f"Box: {prediction.x}, {prediction.y}, {prediction.width}, {prediction.height}")
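Downstream code typically drops low-confidence detections before using them. A hedged sketch of that step, operating on the prediction fields shown above, here represented as plain dicts (filter_by_confidence is a hypothetical helper, not an SDK function):

```python
def filter_by_confidence(predictions: list, min_confidence: float = 0.5) -> list:
    """Keep only predictions at or above the confidence threshold."""
    return [p for p in predictions if p["confidence"] >= min_confidence]

# Sample detections in the dict shape used by Roboflow prediction payloads
preds = [
    {"class": "car", "confidence": 0.91},
    {"class": "person", "confidence": 0.32},
]
print(filter_by_confidence(preds))  # [{'class': 'car', 'confidence': 0.91}]
```

The threshold is a deployment-specific trade-off: raising it reduces false positives at the cost of missing low-confidence objects.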
Related Pages
Implements Principle
Requires Environment