
Principle:Cleanlab Confident Joint Estimation

From Leeroopedia


Knowledge Sources
Domains Machine_Learning, Data_Quality
Last Updated 2026-02-09 19:00 GMT

Overview

A statistical method that estimates the joint distribution of noisy given labels and latent true labels by counting examples whose predicted probabilities exceed per-class confidence thresholds.

Description

The confident joint is a K x K matrix where entry (i, j) estimates the count of examples whose given (noisy) label is i and whose true (latent) label is j. It uses per-class thresholds derived from the average model confidence for each class to determine which examples are "confidently" assigned to each (given, true) pair. This approach avoids arbitrary global thresholds and instead adapts to the model's calibration on a per-class basis.

The confident joint is the core statistical object in Confident Learning from which all downstream quantities are derived: noise transition matrices, label issue identification, dataset health metrics, and class-level quality scores. An optional calibration step ensures that the row sums of the confident joint match the empirical label distribution, correcting for any systematic under- or over-counting.

Usage

Use when you need to estimate the noise structure of your dataset, including how often labels of one class are confused with another. The confident joint is required as input (or is computed internally) by functions that estimate noise matrices, find label issues, and compute dataset health summaries. It provides a complete picture of the label corruption pattern in the dataset.

Theoretical Basis

Given N examples with K classes, let labels be the array of noisy given labels and pred_probs be the (N, K) matrix of out-of-sample predicted probabilities.

Step 1: Compute per-class thresholds.

For each class j in 0, ..., K-1:

t_j = mean( pred_probs[i, j]  for all i where labels[i] == j )

This sets the threshold for class j as the average predicted probability of class j among examples that are labeled as class j. Well-separated classes will have high thresholds; confused classes will have lower thresholds.
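Step 1 can be sketched in a few lines of NumPy. The dataset below is a made-up toy example (6 examples, 2 classes) chosen only to make the arithmetic easy to follow; it is not from the source.

```python
import numpy as np

# Toy dataset: 6 examples, 2 classes (values are illustrative only).
labels = np.array([0, 0, 0, 1, 1, 1])
pred_probs = np.array([
    [0.9, 0.1],
    [0.8, 0.2],
    [0.3, 0.7],  # labeled 0, but the model leans toward class 1
    [0.2, 0.8],
    [0.3, 0.7],
    [0.6, 0.4],
])

# Per-class threshold t_j: mean predicted probability of class j
# among examples whose given label is j.
K = pred_probs.shape[1]
thresholds = np.array([pred_probs[labels == j, j].mean() for j in range(K)])
print(thresholds)  # approx [0.667, 0.633]
```

Note that the noisily labeled example (row 2) drags the class-0 threshold down, exactly the per-class adaptation the text describes.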

Step 2: Assign confident labels.

For each example i with given label labels[i] = s:

confident_true_label = argmax_{j : pred_probs[i, j] >= t_j} ( pred_probs[i, j] )

An example is "confidently" assigned true label j if its predicted probability for class j meets or exceeds the threshold t_j. If multiple classes meet their thresholds, the class with the highest predicted probability is chosen; if no class meets its threshold, the example receives no confident label and is excluded from the count.
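One way to vectorize this assignment rule is shown below, continuing the same toy dataset as before; the `-1` sentinel for examples with no class above threshold is our own convention, not part of the source.

```python
import numpy as np

labels = np.array([0, 0, 0, 1, 1, 1])
pred_probs = np.array([
    [0.9, 0.1],
    [0.8, 0.2],
    [0.3, 0.7],
    [0.2, 0.8],
    [0.3, 0.7],
    [0.6, 0.4],
])
K = pred_probs.shape[1]
thresholds = np.array([pred_probs[labels == j, j].mean() for j in range(K)])

# A class is a candidate if its probability meets its own threshold.
above = pred_probs >= thresholds          # (N, K) boolean mask
# Among candidate classes, pick the highest predicted probability;
# examples with no candidate class get -1 and are later skipped.
masked = np.where(above, pred_probs, -np.inf)
confident = np.where(above.any(axis=1), masked.argmax(axis=1), -1)
print(confident)  # [ 0  0  1  1  1 -1]
```

Here row 2 (given label 0) is confidently assigned true label 1, while row 5 clears neither threshold and is dropped.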

Step 3: Count to form the confident joint.

C[s][j] = count of examples with given label s and confident true label j
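The counting step is a straightforward tally over (given, confident) pairs. This sketch continues the toy example; skipping examples without a confident label (marked `-1`) is our convention.

```python
import numpy as np

labels = np.array([0, 0, 0, 1, 1, 1])
pred_probs = np.array([
    [0.9, 0.1],
    [0.8, 0.2],
    [0.3, 0.7],
    [0.2, 0.8],
    [0.3, 0.7],
    [0.6, 0.4],
])
K = pred_probs.shape[1]
thresholds = np.array([pred_probs[labels == j, j].mean() for j in range(K)])
above = pred_probs >= thresholds
masked = np.where(above, pred_probs, -np.inf)
confident = np.where(above.any(axis=1), masked.argmax(axis=1), -1)

# C[s][j]: count of examples with given label s and confident true label j.
C = np.zeros((K, K), dtype=int)
for s, j in zip(labels, confident):
    if j >= 0:  # skip examples with no confident label
        C[s, j] += 1
print(C)  # [[2 1]
          #  [0 2]]
```

The off-diagonal entry C[0][1] = 1 counts the example that was given label 0 but confidently looks like class 1, i.e. a suspected label error.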

Step 4 (optional): Calibrate.

Normalize C so that its row sums match the empirical label counts:

C_calibrated[s][j] = C[s][j] * (count(labels == s) / sum(C[s, :]))

This ensures the confident joint is consistent with the observed label distribution.
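The calibration formula above amounts to a per-row rescaling. The sketch below applies it to the uncalibrated counts from the toy example; note that row 1 sums to 2 while 3 examples carry given label 1 (one was dropped for clearing no threshold), so calibration restores the row to 3.

```python
import numpy as np

# Uncalibrated confident joint from the counting step,
# and the observed label counts count(labels == s).
C = np.array([[2, 1],
              [0, 2]], dtype=float)
label_counts = np.array([3, 3])

# Rescale each row s by count(labels == s) / sum(C[s, :]).
row_sums = C.sum(axis=1)
C_calibrated = C * (label_counts / row_sums)[:, None]
print(C_calibrated)  # [[2. 1.]
                     #  [0. 3.]]
```

After calibration the row sums match the empirical label distribution, and the total count equals N, as the text requires.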

Related Pages

Implemented By

Uses Heuristic
