# Heuristic: OBSS SAHI Confidence Threshold Setting
| Knowledge Sources | |
|---|---|
| Domains | Computer_Vision, Object_Detection, Optimization |
| Last Updated | 2026-02-08 12:00 GMT |
## Overview
Guidelines for setting the model confidence threshold in sliced inference, including its interaction with postprocessing auto-switching behavior.
## Description
The `model_confidence_threshold` parameter controls which detection predictions are retained after model inference. Only predictions with confidence >= this threshold pass through. In SAHI's sliced inference pipeline, this threshold has an additional side effect: when set below 0.1, SAHI automatically overrides the postprocessing algorithm from GREEDYNMM/IOS to NMS/IOU to prevent false positive proliferation.
This auto-switch behavior is critical tribal knowledge — it means the postprocessing strategy you configure may not be the one actually used if your confidence threshold is very low.
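To make the retention rule concrete, here is a minimal sketch of confidence-based filtering. The `filter_by_confidence` helper and the `(label, score)` tuples are illustrative only, not SAHI's internal representation:

```python
# Minimal sketch of the confidence filter: keep only predictions whose
# score meets the threshold. The (label, score) tuples and this helper
# are hypothetical, not SAHI's internal API.

def filter_by_confidence(predictions, threshold):
    """Retain predictions with confidence >= threshold."""
    return [p for p in predictions if p[1] >= threshold]

predictions = [("car", 0.92), ("car", 0.31), ("person", 0.08)]

# The default threshold of 0.25 drops the low-confidence detection.
print(filter_by_confidence(predictions, 0.25))
# [('car', 0.92), ('car', 0.31)]
```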
## Usage
Use this heuristic when setting `model_confidence_threshold` in `get_sliced_prediction()`. Consider adjusting when:
- You need high recall (lower threshold) but get too many false positives
- The postprocessing auto-switch is unexpectedly changing your algorithm
- You want to force a specific postprocessing type regardless of confidence
## The Insight (Rule of Thumb)
- Default: `model_confidence_threshold=0.25` (set on the detection model, not in `get_sliced_prediction`)
- Critical Boundary: Setting threshold below 0.1 triggers automatic postprocessing override to NMS/IOU
- Override Prevention: Set `force_postprocess_type=True` to prevent the auto-switch (available in `predict()`)
- Trade-off: Lower threshold = higher recall but more false positives and potentially different postprocessing. Higher threshold = higher precision but may miss true positives.
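The recall/precision trade-off can be seen on synthetic scores (illustrative numbers, not real model output): lowering the threshold retains more predictions, including the low-confidence ones that are more likely to be false positives.

```python
# Illustrative trade-off on synthetic confidence scores: a lower
# threshold keeps more detections (higher recall) at the cost of
# admitting low-confidence ones (more potential false positives).

scores = [0.95, 0.72, 0.40, 0.26, 0.12, 0.07, 0.03]

kept_default = [s for s in scores if s >= 0.25]  # default threshold
kept_low = [s for s in scores if s >= 0.05]      # high-recall setting

print(len(kept_default))  # 4
print(len(kept_low))      # 6
# Note: 0.05 is below the 0.1 boundary, so SAHI would also auto-switch
# postprocessing to NMS/IOU at this setting unless it is forced.
```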
## Reasoning
The 0.1 boundary exists because empirical testing showed that GREEDYNMM with IOS at very low confidence thresholds produces pathological merging behavior. Low-confidence predictions overlap extensively, and the merging algorithm combines them into phantom objects that do not exist. NMS with the stricter IOU metric is more appropriate in this regime because it suppresses rather than merges.
The auto-switch is logged as a warning to alert users. Code evidence from `sahi/predict.py:44,548-554`:

```python
LOW_MODEL_CONFIDENCE = 0.1

if not force_postprocess_type and model_confidence_threshold < LOW_MODEL_CONFIDENCE and postprocess_type != "NMS":
    logger.warning(
        f"Switching postprocess type/metric to NMS/IOU since confidence "
        f"threshold is low ({model_confidence_threshold})."
    )
    postprocess_type = "NMS"
    postprocess_match_metric = "IOU"
```
The `force_postprocess_type` flag (available only in the full `predict()` function, not in `get_sliced_prediction()`) allows advanced users to bypass this safety mechanism when they have domain-specific reasons for using GREEDYNMM at low confidence.
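The combined decision can be modeled as a small pure function. This is an illustrative re-implementation of the logic quoted above for reasoning about which settings take effect; `effective_postprocess` is hypothetical, not part of SAHI's API:

```python
# Illustrative re-implementation of the auto-switch decision quoted from
# sahi/predict.py: given the user's settings, return the postprocess
# type/metric actually applied. Not SAHI's own API.

LOW_MODEL_CONFIDENCE = 0.1

def effective_postprocess(confidence_threshold,
                          postprocess_type="GREEDYNMM",
                          postprocess_match_metric="IOS",
                          force_postprocess_type=False):
    if (not force_postprocess_type
            and confidence_threshold < LOW_MODEL_CONFIDENCE
            and postprocess_type != "NMS"):
        return "NMS", "IOU"  # safety override kicks in
    return postprocess_type, postprocess_match_metric

print(effective_postprocess(0.25))  # ('GREEDYNMM', 'IOS')
print(effective_postprocess(0.05))  # ('NMS', 'IOU')  -- auto-switched
print(effective_postprocess(0.05, force_postprocess_type=True))
# ('GREEDYNMM', 'IOS')  -- override bypassed
```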