Implementation:Online ml River Metrics Accuracy

From Leeroopedia


Knowledge Sources: River, River Docs
Domains: Online_Learning, Evaluation, Classification
Last Updated: 2026-02-08 16:00 GMT

Overview

Concrete tool for incrementally computing classification accuracy as the ratio of correct predictions to total predictions, backed by a streaming confusion matrix.

Description

The metrics.Accuracy class computes the proportion of correct predictions seen so far. It inherits from metrics.base.MultiClassMetric, which provides an incrementally updated confusion matrix (self.cm). Each call to update(y_true, y_pred) increments the appropriate cell of the confusion matrix. The get() method then computes accuracy by dividing the sum of the diagonal (total true positives across all classes) by the total weight of all observations.
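To make the computation concrete, the sketch below recomputes the same quantity by hand. The dictionary-based confusion matrix is purely illustrative and is not River's internal representation.

from collections import defaultdict

# Illustrative only: accuracy as (sum of the confusion matrix diagonal) / (total weight).
cm = defaultdict(float)   # (y_true, y_pred) -> accumulated weight
total_weight = 0.0

def update(y_true, y_pred, w=1.0):
    global total_weight
    cm[(y_true, y_pred)] += w
    total_weight += w

def get():
    if total_weight == 0:
        return 0.0  # no observations yet
    diagonal = sum(w for (yt, yp), w in cm.items() if yt == yp)
    return diagonal / total_weight

for yt, yp in [(True, True), (False, True), (True, False), (True, True), (True, True)]:
    update(yt, yp)

print(get())
# 0.6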

The class supports:

  • Weighted observations: The update method accepts an optional weight parameter w (default 1.0).
  • Multi-class classification: Although the examples below use binary classification, the metric works with any number of classes.
  • Confusion matrix sharing: A confusion matrix can be shared between multiple metrics via the cm parameter, reducing redundant computation (see the sketch below).
  • Revert support: Inherited from the base class, allowing observations to be "unlearned" if needed.

The get() method returns 0.0 if no observations have been processed (division by zero protection).
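The sketch below exercises the weighting, sharing, and revert features listed above. It assumes a metrics.ConfusionMatrix instance can be passed as the cm argument and that update and revert accept the w keyword, as described in the Code Reference; exact numerical formatting may differ between River versions.

from river import metrics

# One confusion matrix feeds two metrics, so each observation is counted once.
shared_cm = metrics.ConfusionMatrix()
acc = metrics.Accuracy(cm=shared_cm)
macro_f1 = metrics.MacroF1(cm=shared_cm)

acc.update(True, True)           # correct, unit weight
acc.update(True, False, w=2.0)   # wrong, counts as two observations

print(acc.get())       # 1 / 3 ≈ 0.333
print(macro_f1.get())  # computed from the same shared matrix

# Revert the weighted mistake: the remaining observation is correct.
acc.revert(True, False, w=2.0)
print(acc.get())       # 1.0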

Usage

Import this class when you need to:

  • Track classification accuracy in an online learning evaluation loop.
  • Pass an accuracy metric to evaluate.progressive_val_score or evaluate.iter_progressive_val_score.
  • Monitor model performance in real time.

Code Reference

Source Location

File: river/metrics/accuracy.py
Lines: L8-L39

Signature

class Accuracy(metrics.base.MultiClassMetric):
    # Inherited from the base class (cm is the optional shared confusion matrix)
    def __init__(self, cm=None)
    def update(self, y_true, y_pred, w=1.0)
    def revert(self, y_true, y_pred, w=1.0)

    # Accuracy-specific
    def get(self) -> float

Import

from river import metrics

metric = metrics.Accuracy()

I/O Contract

Inputs

Parameter Type Default Description
cm (constructor) ConfusionMatrix or None None Optional shared confusion matrix. If None, a new one is created.
y_true (to update) any hashable (required) The true class label.
y_pred (to update) any hashable (required) The predicted class label.
w (to update) float 1.0 Observation weight.

Outputs

Method Return Type Description
get() float Accuracy value in [0.0, 1.0], computed as total_true_positives / total_weight. Returns 0.0 if no observations have been processed.
__repr__() str Formatted string such as "Accuracy: 88.96%".
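A quick check of the zero-division guard described above: a metric that has seen no observations reports 0.0.

from river import metrics

metric = metrics.Accuracy()
print(metric.get())
# 0.0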

Usage Examples

Basic incremental accuracy:

from river import metrics

y_true = [True, False, True, True, True]
y_pred = [True, True, False, True, True]

metric = metrics.Accuracy()
for yt, yp in zip(y_true, y_pred):
    metric.update(yt, yp)

print(metric)
# Accuracy: 60.00%
print(metric.get())
# 0.6
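Continuing the example, revert removes a previously counted observation (useful when a label turns out to be wrong or arrives with a delay); this follows the inherited revert signature shown in the Code Reference.

# Unlearn the last (correct) pair: 3 correct out of 5 becomes 2 correct out of 4.
metric.revert(True, True)
print(metric.get())
# 0.5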

With progressive validation:

from river import datasets, evaluate, linear_model, metrics, preprocessing

dataset = datasets.Phishing()
model = preprocessing.StandardScaler() | linear_model.LogisticRegression()
metric = metrics.Accuracy()

evaluate.progressive_val_score(dataset, model, metric)
# Accuracy: 88.96%
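For real-time monitoring, evaluate.iter_progressive_val_score yields intermediate checkpoints instead of a single final score. A minimal sketch, assuming the step keyword sets the reporting interval (check your River version's documentation for the exact checkpoint format):

from river import datasets, evaluate, linear_model, metrics, preprocessing

model = preprocessing.StandardScaler() | linear_model.LogisticRegression()

# Report the running accuracy every 250 observations.
for checkpoint in evaluate.iter_progressive_val_score(
    datasets.Phishing(), model, metrics.Accuracy(), step=250
):
    print(checkpoint)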

Manual predict-then-learn loop:

from river import datasets, linear_model, metrics, preprocessing

model = preprocessing.StandardScaler() | linear_model.LogisticRegression()
metric = metrics.Accuracy()

for x, y in datasets.Phishing():
    y_pred = model.predict_one(x)
    metric.update(y, y_pred)
    model.learn_one(x, y)

print(metric)
# Accuracy: 88.96%
