Implementation: OBSS SAHI get_prediction
| Knowledge Sources | Details |
|---|---|
| Domains | Object_Detection, Computer_Vision, Inference |
| Last Updated | 2026-02-08 12:00 GMT |
Overview
Concrete tool from the SAHI library for running single-image object detection inference with coordinate remapping.
Description
get_prediction() performs object detection on a single image (or image slice) and returns a PredictionResult containing a list of ObjectPrediction objects. It orchestrates three internal steps:
- Read the image as a PIL Image and convert to a contiguous numpy array
- Call detection_model.perform_inference() to run the forward pass
- Call detection_model.convert_original_predictions() with shift_amount and full_shape to remap coordinates to full-image space
The function also applies optional class filtering (exclude_classes_by_name/exclude_classes_by_id) and an optional postprocessing step. It tracks timing for profiling via durations_in_seconds.
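The coordinate-remapping step above can be illustrated in isolation. The following is a minimal, standalone sketch of the idea, not SAHI's internal code: it shifts a slice-local `[x1, y1, x2, y2]` box by `[shift_x, shift_y]` and clips it to the `[height, width]` given in `full_shape`. The function name `remap_to_full_image` is hypothetical.

```python
def remap_to_full_image(bbox_xyxy, shift_amount, full_shape):
    """Shift a slice-local [x1, y1, x2, y2] box into full-image coordinates.

    shift_amount is [shift_x, shift_y]; full_shape is [height, width],
    used here to clip the shifted box to the image bounds.
    This is an illustrative sketch, not SAHI's implementation.
    """
    shift_x, shift_y = shift_amount
    height, width = full_shape
    x1, y1, x2, y2 = bbox_xyxy
    return [
        min(max(x1 + shift_x, 0), width),
        min(max(y1 + shift_y, 0), height),
        min(max(x2 + shift_x, 0), width),
        min(max(y2 + shift_y, 0), height),
    ]

# A box at [10, 20, 50, 60] in a slice taken at offset (120, 240)
print(remap_to_full_image([10, 20, 50, 60], [120, 240], [1920, 2560]))
# [130, 260, 170, 300]
```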
Usage
Use this function to run detection on individual image slices within the sliced inference loop; get_sliced_prediction() calls it once per slice. It can also be used standalone for single-image inference without slicing.
Code Reference
Source Location
- Repository: sahi
- File: sahi/predict.py
- Lines: L56-135
Signature
def get_prediction(
    image,
    detection_model,
    shift_amount: list | None = None,
    full_shape=None,
    postprocess: PostprocessPredictions | None = None,
    verbose: int = 0,
    exclude_classes_by_name: list[str] | None = None,
    exclude_classes_by_id: list[int] | None = None,
) -> PredictionResult:
    """Perform prediction on a single image.

    Args:
        image: str or np.ndarray - image path or numpy array
        detection_model: DetectionModel instance
        shift_amount: [shift_x, shift_y] to remap slice to full image
        full_shape: [height, width] of the full image
        postprocess: Optional PostprocessPredictions for single-image NMS
        verbose: 0=silent, 1=print timing
        exclude_classes_by_name: Class names to exclude
        exclude_classes_by_id: Class IDs to exclude
    """
Import
from sahi.predict import get_prediction
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| image | str or np.ndarray | Yes | Image path or numpy array to detect on |
| detection_model | DetectionModel | Yes | Initialized detection model from AutoDetectionModel |
| shift_amount | list[int] | No | [shift_x, shift_y] offset for coordinate remapping (default [0,0]) |
| full_shape | list[int] | No | [height, width] of original image (inferred if None) |
| postprocess | PostprocessPredictions | No | Optional single-image postprocessing |
| verbose | int | No | Verbosity level (0=silent, 1=print timing) |
| exclude_classes_by_name | list[str] | No | Class names to filter out |
| exclude_classes_by_id | list[int] | No | Class IDs to filter out |
Outputs
| Name | Type | Description |
|---|---|---|
| return | PredictionResult | Contains .object_prediction_list (list of ObjectPrediction with remapped coords) and .durations_in_seconds (dict with "prediction" and "postprocess" timing) |
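Because each call returns its own durations_in_seconds dict, per-slice timings can be aggregated to profile a sliced run. A minimal sketch over plain dicts with the "prediction"/"postprocess" keys described above (the helper name `total_durations` is hypothetical):

```python
def total_durations(per_slice_durations):
    """Sum per-slice timing dicts (keys like "prediction", "postprocess")."""
    totals = {}
    for durations in per_slice_durations:
        for key, seconds in durations.items():
            totals[key] = totals.get(key, 0.0) + seconds
    return totals

# e.g. the .durations_in_seconds dicts collected from several slices
slices = [
    {"prediction": 0.04, "postprocess": 0.01},
    {"prediction": 0.05, "postprocess": 0.02},
]
totals = total_durations(slices)
print(f"prediction: {totals['prediction']:.2f}s, postprocess: {totals['postprocess']:.2f}s")
```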
Usage Examples
Single Slice Inference
from sahi import AutoDetectionModel
from sahi.predict import get_prediction

# Initialize model
model = AutoDetectionModel.from_pretrained(
    model_type="ultralytics",
    model_path="yolov8n.pt",
    device="cuda:0",
)

# Run detection on a slice with coordinate remapping
result = get_prediction(
    image=slice_numpy_array,  # numpy array from slice_image()
    detection_model=model,
    shift_amount=[120, 240],  # slice offset in full image
    full_shape=[1920, 2560],  # original image [height, width]
)

for pred in result.object_prediction_list:
    print(f"Class: {pred.category.name}, Score: {pred.score.value:.2f}")
    print(f"BBox (full image coords): {pred.bbox.to_xyxy()}")
Standalone Full-Image Inference
from sahi.predict import get_prediction
result = get_prediction(
    image="path/to/image.jpg",
    detection_model=model,
    # No shift_amount needed for full image
)
print(f"Found {len(result.object_prediction_list)} objects")
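The exclude_classes_by_name / exclude_classes_by_id semantics can be shown without a model. A hedged sketch over plain dicts standing in for ObjectPrediction objects (the helper `filter_predictions` and the dict shape are illustrative assumptions, not SAHI's API):

```python
def filter_predictions(predictions, exclude_names=None, exclude_ids=None):
    """Drop predictions whose category name or id is in an exclusion list.

    Mirrors the exclude_classes_by_name / exclude_classes_by_id behavior
    described above, on plain {"name", "id", "score"} dicts.
    """
    exclude_names = set(exclude_names or [])
    exclude_ids = set(exclude_ids or [])
    return [
        p for p in predictions
        if p["name"] not in exclude_names and p["id"] not in exclude_ids
    ]

preds = [
    {"name": "car", "id": 2, "score": 0.9},
    {"name": "person", "id": 0, "score": 0.8},
    {"name": "truck", "id": 7, "score": 0.7},
]
kept = filter_predictions(preds, exclude_names=["person"], exclude_ids=[7])
print([p["name"] for p in kept])  # ['car']
```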