
Implementation:Scikit learn contrib Imbalanced learn sensitivity specificity support

From Leeroopedia


Knowledge Sources
Domains: Machine_Learning, Model_Evaluation, Imbalanced_Learning
Last Updated: 2026-02-09 03:00 GMT

Overview

A concrete tool from the imbalanced-learn library for computing per-class sensitivity, specificity, and support.

Description

The sensitivity_specificity_support function computes sensitivity (recall), specificity (true negative rate), and support (the number of true instances) for each class. The convenience wrappers sensitivity_score and specificity_score each return a single metric. Both binary and multiclass problems are supported, with 'macro', 'weighted', 'micro', or per-class (average=None) results.
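
To make the definitions concrete, the sketch below cross-checks the two wrappers against scikit-learn's recall_score on a small binary problem (it assumes scikit-learn is installed, which imbalanced-learn already requires):

from imblearn.metrics import sensitivity_score, specificity_score
from sklearn.metrics import recall_score

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

# Sensitivity of the positive class is exactly its recall ...
print(sensitivity_score(y_true, y_pred))          # 2/3
print(recall_score(y_true, y_pred))               # 2/3

# ... while specificity is the recall of the negative class (TNR).
print(specificity_score(y_true, y_pred))          # 2/3
print(recall_score(y_true, y_pred, pos_label=0))  # 2/3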

Usage

Import this function (or the convenience wrappers sensitivity_score and specificity_score) to evaluate per-class or averaged sensitivity and specificity.

Code Reference

Source Location

  • Repository: imbalanced-learn
  • File: imblearn/metrics/_classification.py
  • Lines: L48-530 (sensitivity_specificity_support: L48-314, sensitivity_score: L315-414, specificity_score: L431-530)

Signature

def sensitivity_specificity_support(
    y_true,
    y_pred,
    *,
    labels=None,
    pos_label=1,
    average=None,
    warn_for=("sensitivity", "specificity"),
    sample_weight=None,
    zero_division="warn",
):
    """Returns (sensitivity, specificity, support) tuple."""

def sensitivity_score(
    y_true, y_pred, *, labels=None, pos_label=1, average="binary", sample_weight=None
):
    """Returns sensitivity (recall) score."""

def specificity_score(
    y_true, y_pred, *, labels=None, pos_label=1, average="binary", sample_weight=None
):
    """Returns specificity (true negative rate) score."""

Import

from imblearn.metrics import sensitivity_score, specificity_score
from imblearn.metrics import sensitivity_specificity_support

I/O Contract

Inputs

Name | Type | Required | Description
y_true | array-like of shape (n_samples,) | Yes | Ground truth labels
y_pred | array-like of shape (n_samples,) | Yes | Predicted labels
labels | array-like | No | Set of labels to include (default: None, inferred from the data)
pos_label | int or str | No | Positive class for binary averaging (default: 1)
average | str or None | No | Averaging mode: 'binary', 'macro', 'weighted', 'micro', or None for per-class values
warn_for | tuple of str | No | Metrics for which undefined-value warnings are emitted (default: ('sensitivity', 'specificity'))
sample_weight | array-like of shape (n_samples,) | No | Per-sample weights (default: None)
zero_division | 'warn', 0, or 1 | No | Value reported when a metric is undefined; 'warn' behaves like 0 but also emits a warning

Outputs

Name | Type | Description
sensitivity | float or ndarray | Sensitivity (recall), per class or averaged
specificity | float or ndarray | Specificity (true negative rate), per class or averaged
support | ndarray | Number of true instances per class (None when an averaging mode is used)
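
The shape of the outputs depends on average. Assuming the function mirrors scikit-learn's precision_recall_fscore_support, on which its interface is modeled, support is only meaningful per class and comes back as None under any averaging mode:

from imblearn.metrics import sensitivity_specificity_support

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# average=None: one entry per class, as ndarrays of shape (n_classes,)
sen, spe, sup = sensitivity_specificity_support(y_true, y_pred, average=None)
print(sen.shape, spe.shape, sup.shape)

# average="macro": scalar metrics; support is not averaged (expected: None)
sen, spe, sup = sensitivity_specificity_support(y_true, y_pred, average="macro")
print(sen, spe, sup)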

Usage Examples

from imblearn.metrics import sensitivity_score, specificity_score

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

# Macro-averaged scores across the three classes
sens = sensitivity_score(y_true, y_pred, average="macro")
spec = specificity_score(y_true, y_pred, average="macro")
print(f"Sensitivity: {sens:.3f}, Specificity: {spec:.3f}")

# Per-class values (average=None, the default)
from imblearn.metrics import sensitivity_specificity_support
sen, spe, sup = sensitivity_specificity_support(y_true, y_pred)
print(f"Per-class sensitivity: {sen}")
print(f"Per-class specificity: {spe}")
print(f"Support: {sup}")

Related Pages

Implements Principle

Requires Environment
