Implementation:Obss Sahi Get Prediction Full Image
| Knowledge Sources | |
|---|---|
| Domains | Object_Detection, Computer_Vision, Inference |
| Last Updated | 2026-02-08 12:00 GMT |
Overview
A concrete tool from the SAHI library for running full-image detection as a complement to sliced detection, reusing the same get_prediction() function with a zero shift offset.
Description
Full-image detection in SAHI uses the same get_prediction() function as per-slice detection, but with shift_amount=[0, 0] and full_shape=None (inferred from the image). This pass is invoked within get_sliced_prediction() at sahi/predict.py:L313-326 when perform_standard_pred=True and the image was sliced into multiple tiles.
The full-image result is appended to the per-slice results and the combined set is passed to the postprocessing (merging) step. This ensures large objects are detected at their natural scale.
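The role of shift_amount can be illustrated with a small standalone sketch (plain Python, independent of SAHI's internals): a per-slice detection's box must be offset by the slice's top-left corner to land in full-image coordinates, while a full-image detection with shift_amount=[0, 0] maps to itself.

```python
def shift_bbox(bbox, shift_amount):
    """Remap an [x1, y1, x2, y2] box from slice coordinates to
    full-image coordinates by adding the slice's top-left offset."""
    dx, dy = shift_amount
    x1, y1, x2, y2 = bbox
    return [x1 + dx, y1 + dy, x2 + dx, y2 + dy]

# A box detected inside a slice whose top-left corner is at (640, 320):
print(shift_bbox([10, 20, 110, 220], [640, 320]))  # [650, 340, 750, 540]

# A full-image detection needs no remapping: [0, 0] is the identity shift.
print(shift_bbox([10, 20, 110, 220], [0, 0]))  # [10, 20, 110, 220]
```

This is why the full-image pass can reuse the exact same code path as the per-slice passes: the coordinate remapping simply becomes a no-op.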
Usage
This implementation is invoked automatically within the sliced prediction pipeline. It uses the same get_prediction() API with different parameters. Use it explicitly when you want to combine full-image and slice-based detections manually.
Code Reference
Source Location
- Repository: sahi
- File: sahi/predict.py
- Lines: L56-135 (same function as per-slice detection)
- Invocation context: L313-326 within get_sliced_prediction()
Signature
def get_prediction(
    image,
    detection_model,
    shift_amount: list | None = None,  # [0, 0] for full image
    full_shape=None,  # inferred from image
    postprocess: PostprocessPredictions | None = None,
    verbose: int = 0,
    exclude_classes_by_name: list[str] | None = None,
    exclude_classes_by_id: list[int] | None = None,
) -> PredictionResult:
    """Perform prediction on the full image (no slicing).

    For full-image detection, shift_amount defaults to [0, 0]
    and full_shape is inferred from the input image dimensions.
    """
Import
from sahi.predict import get_prediction
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| image | str or np.ndarray | Yes | Full original image (path or numpy array) |
| detection_model | DetectionModel | Yes | Same initialized model used for per-slice detection |
| shift_amount | list[int] | No | Always [0, 0] for full-image detection (no remapping) |
| full_shape | list[int] | No | None (inferred from image dimensions) |
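The full_shape inference for the None case can be sketched in a few lines of standalone numpy (an illustration of the [height, width] ordering, not SAHI's internal code):

```python
import numpy as np

def infer_full_shape(image):
    """Return [height, width] for a numpy image, matching the
    [H, W] ordering the full_shape parameter expects."""
    h, w = image.shape[:2]
    return [h, w]

img = np.zeros((1080, 1920, 3), dtype=np.uint8)  # a dummy HD frame
print(infer_full_shape(img))  # [1080, 1920]
```

Because the full-image pass sees the whole image, passing full_shape explicitly is unnecessary; it only matters for slices, where the model sees a crop but boxes must be clipped to the original extent.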
Outputs
| Name | Type | Description |
|---|---|---|
| return | PredictionResult | Contains .object_prediction_list with full-image detections (coordinates already in full-image space) |
Usage Examples
Full-Image Detection Pass
from sahi import AutoDetectionModel
from sahi.predict import get_prediction
model = AutoDetectionModel.from_pretrained(
    model_type="ultralytics",
    model_path="yolov8n.pt",
    device="cuda:0",
)

# Full-image detection (no shift needed)
full_result = get_prediction(
    image="path/to/large_image.jpg",
    detection_model=model,
    shift_amount=[0, 0],
)
print(f"Full-image detections: {len(full_result.object_prediction_list)}")
Combining with Slice Detections
from sahi import AutoDetectionModel
from sahi.predict import get_prediction
from sahi.slicing import slice_image

model = AutoDetectionModel.from_pretrained(
    model_type="ultralytics",
    model_path="yolov8n.pt",
)

# Slice the image
slice_result = slice_image(image="large.jpg", slice_height=640, slice_width=640)
# Collect per-slice predictions
all_preds = []
for i, img in enumerate(slice_result.images):
    result = get_prediction(
        image=img,
        detection_model=model,
        shift_amount=slice_result.starting_pixels[i],
        full_shape=[slice_result.original_image_height, slice_result.original_image_width],
    )
    all_preds.extend(result.object_prediction_list)
# Add full-image detection
full_result = get_prediction(
    image="large.jpg",
    detection_model=model,
    shift_amount=[0, 0],
)
all_preds.extend(full_result.object_prediction_list)
# all_preds now ready for merging step
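The merging step that consumes the combined list is handled by SAHI's postprocess classes; its effect can be approximated with a standalone greedy NMS sketch over plain ([x1, y1, x2, y2], score) tuples. This is an illustration of why merging is needed after adding the full-image pass, not SAHI's actual implementation:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def merge_predictions(preds, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping duplicates.
    Each prediction is a ([x1, y1, x2, y2], score) tuple."""
    preds = sorted(preds, key=lambda p: p[1], reverse=True)
    kept = []
    for box, score in preds:
        if all(iou(box, k[0]) < iou_threshold for k in kept):
            kept.append((box, score))
    return kept

# A slice detection and a near-identical full-image detection of the same
# object collapse into one box; the unrelated box survives.
combined = [
    ([650, 340, 750, 540], 0.91),   # from a slice, already remapped
    ([648, 338, 752, 542], 0.88),   # same object, from the full-image pass
    ([100, 100, 200, 200], 0.75),   # a different object
]
print(merge_predictions(combined))  # 2 boxes remain
```

The full-image pass inevitably re-detects objects that the slices also found, so a deduplication step like this is what makes appending the full-image result safe.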