
Implementation:Snorkel team Snorkel LabelModel Predict

From Leeroopedia
Knowledge Sources
Domains Weak_Supervision, Probabilistic_Inference
Last Updated 2026-02-14 20:00 GMT

Overview

Concrete tool for generating probabilistic or discrete labels from a trained label model, provided by the Snorkel library.

Description

The LabelModel.predict_proba() and LabelModel.predict() methods perform inference on a label matrix using the trained model parameters. predict_proba returns soft probability distributions over classes, while predict returns hard integer labels with configurable tie-breaking.

Internally, the prediction computes the posterior P(Y|L) by multiplying the learned conditional probabilities with the augmented label matrix in log-space, then normalizing.
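The log-space posterior computation described above can be sketched in plain numpy. This is a conceptual sketch, not Snorkel's implementation: the conditional-probability matrix `mu` and prior `p` here are hypothetical stand-ins for the model's learned parameters.

```python
import numpy as np

def predict_proba_sketch(L, mu, p):
    """Posterior P(Y | L) via log-space accumulation (conceptual sketch).

    L  : [n, m] label matrix with values in {-1, 0, ..., k-1}
    mu : [m*k, k] conditionals; row j*k + v holds P(LF_j votes v | Y)
    p  : [k] class prior
    """
    n, m = L.shape
    k = mu.shape[1]
    # One-hot "augmented" matrix: abstains (-1) contribute nothing.
    L_aug = np.zeros((n, m * k))
    for j in range(m):
        for v in range(k):
            L_aug[:, j * k + v] = (L[:, j] == v)
    # Sum log-conditionals for the votes actually cast, add log prior.
    log_post = L_aug @ np.log(np.clip(mu, 1e-12, None)) + np.log(p)
    # Normalize back in probability space (subtract max for stability).
    post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
    return post / post.sum(axis=1, keepdims=True)
```

With two 90%-accurate labeling functions, two agreeing votes push the posterior well past 0.9, while an all-abstain row falls back to the prior.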

Usage

Call these methods on a trained LabelModel. Use predict_proba when training a downstream model with soft labels (recommended). Use predict when you need hard labels for evaluation or for models that require discrete inputs.
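To make the soft-label recommendation concrete, here is a minimal sketch of training a downstream model against probabilistic targets: a linear softmax classifier fit with cross-entropy against the soft label distribution. The feature matrix and training loop are our own toy illustration, not part of Snorkel.

```python
import numpy as np

def train_soft(X, probs, steps=200, lr=0.1):
    """Fit a linear softmax classifier against soft targets (toy sketch).

    X     : [n, d] feature matrix
    probs : [n, k] soft labels, e.g. from LabelModel.predict_proba
    """
    n, d = X.shape
    k = probs.shape[1]
    W = np.zeros((d, k))
    for _ in range(steps):
        logits = X @ W
        z = np.exp(logits - logits.max(axis=1, keepdims=True))
        preds = z / z.sum(axis=1, keepdims=True)   # softmax predictions
        # Cross-entropy gradient against soft (probabilistic) targets:
        # identical to the hard-label gradient, but targets need not be one-hot.
        W -= lr * X.T @ (preds - probs) / n
    return W
```

The gradient `preds - probs` is the same expression as for one-hot labels, which is why soft labels slot into standard cross-entropy training without any API change.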

Code Reference

Source Location

  • Repository: snorkel
  • File: snorkel/labeling/model/label_model.py
  • Lines: L389-467 (predict_proba L389-421, predict L423-467)

Signature

class LabelModel(nn.Module, BaseLabeler):
    def predict_proba(self, L: np.ndarray) -> np.ndarray:
        """
        Return label probabilities P(Y | lambda).

        Args:
            L: [n, m] matrix with values in {-1, 0, ..., k-1}.
        Returns:
            [n, k] array of probabilistic labels.
        """

    def predict(
        self,
        L: np.ndarray,
        return_probs: Optional[bool] = False,
        tie_break_policy: str = "abstain",
    ) -> Union[np.ndarray, Tuple[np.ndarray, np.ndarray]]:
        """
        Return predicted labels with tie-breaking.

        Args:
            L: [n, m] matrix with values in {-1, 0, ..., k-1}.
            return_probs: Whether to also return probability matrix.
            tie_break_policy: "abstain", "random", or "true-random".
        Returns:
            [n] array of integer labels; optionally ([n], [n, k]) tuple.
        """

Import

from snorkel.labeling.model import LabelModel

I/O Contract

Inputs

Name              Type        Required  Description
L                 np.ndarray  Yes       Label matrix [n, m] with values in {-1, 0, ..., k-1}
return_probs      bool        No        Also return probability matrix (predict only; default False)
tie_break_policy  str         No        Tie-breaking: "abstain", "random", "true-random" (default "abstain")

Outputs

Name                         Type                           Description
predict_proba result         np.ndarray                     [n, k] probability matrix; each row sums to 1
predict result               np.ndarray                     [n] integer label array with values in {-1, 0, ..., k-1}
predict result (with probs)  Tuple[np.ndarray, np.ndarray]  ([n] labels, [n, k] probabilities)
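The contract above can be checked defensively before passing outputs downstream. The validation helper below is our own sketch, not part of Snorkel's API:

```python
import numpy as np

def check_label_model_outputs(probs, preds, cardinality):
    """Sanity-check the LabelModel I/O contract described above."""
    n, k = probs.shape
    assert k == cardinality, "probs must have one column per class"
    assert np.allclose(probs.sum(axis=1), 1.0), "each row must sum to 1"
    assert np.all((probs >= 0) & (probs <= 1)), "probabilities in [0, 1]"
    assert preds.shape == (n,), "preds is a flat [n] array"
    # -1 marks abstains produced by tie_break_policy="abstain".
    assert np.all(np.isin(preds, list(range(-1, cardinality))))
    return True
```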

Usage Examples

Probabilistic Labels

import numpy as np
from snorkel.labeling.model import LabelModel

# Toy label matrix: 3 data points, 3 labeling functions (-1 = abstain)
L_train = np.array([[0, 0, -1], [1, 1, -1], [0, 0, -1]])

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=500, seed=123)

# Get probabilistic labels
probs = label_model.predict_proba(L_train)
print(probs.shape)  # (3, 2)
print(probs)
# Example output (exact values depend on training):
# array([[0.99, 0.01],
#        [0.01, 0.99],
#        [0.99, 0.01]])

Discrete Labels with Tie-Breaking

# Hard predictions (abstain on ties)
preds = label_model.predict(L_train, tie_break_policy="abstain")
print(preds)  # e.g. array([0, 1, 0])

# With probabilities returned
preds, probs = label_model.predict(L_train, return_probs=True)
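To see what the abstain policy does when a tie actually occurs, here is a conceptual numpy sketch of probability-to-label conversion. This is not Snorkel's internal `probs_to_preds` implementation; it only illustrates the abstain-on-tie behavior.

```python
import numpy as np

def to_preds_abstain(probs):
    """Convert [n, k] probabilities to hard labels, abstaining (-1) on ties."""
    top = probs.max(axis=1, keepdims=True)
    ties = (probs == top).sum(axis=1) > 1     # more than one class hits the max
    preds = probs.argmax(axis=1)
    preds[ties] = -1                          # abstain rather than guess
    return preds
```

A row like [0.5, 0.5] has no unique argmax, so under "abstain" it maps to -1; "random" and "true-random" would instead pick among the tied classes.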

Related Pages

Implements Principle

Requires Environment
