
Implementation:Online ml River Metrics MAE

From Leeroopedia


Knowledge Sources
Domains Online_Learning, Evaluation_Metrics, Regression
Last Updated 2026-02-08 16:00 GMT

Overview

Mean Absolute Error (MAE) measures the average absolute difference between predictions and true values.

Description

MAE computes the arithmetic mean of absolute errors |y_true - y_pred|. It provides a linear score in which all individual differences are weighted equally, making it more robust to outliers than squared-error metrics. MAE is expressed in the same units as the target variable, making it easy to interpret. Lower values indicate better predictive performance.
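Because MAE is a plain arithmetic mean, it can be maintained incrementally in O(1) memory, which is what makes it suitable for online learning. A minimal plain-Python sketch of this idea (an illustration only, not River's actual implementation):

```python
# Hypothetical sketch of an online MAE: keep a running sum of absolute
# errors and a sample count, so get() is always the current mean.
class RunningMAE:
    def __init__(self):
        self._total = 0.0  # sum of |y_true - y_pred| seen so far
        self._n = 0        # number of samples seen so far

    def update(self, y_true, y_pred):
        self._total += abs(y_true - y_pred)
        self._n += 1

    def get(self):
        return self._total / self._n if self._n else 0.0

mae = RunningMAE()
for yt, yp in [(3, 2.5), (-0.5, 0.0), (2, 2), (7, 8)]:
    mae.update(yt, yp)
print(mae.get())  # 0.5
```

Each `update` costs constant time and memory regardless of how many samples have been seen, unlike a batch computation that would store the full history.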

Usage

Use MAE when you want an intuitive, easily interpretable error metric that treats all errors equally. Unlike MSE/RMSE which heavily penalize large errors, MAE gives equal weight to all prediction errors. It's particularly useful when outliers should not dominate the error assessment, or when you want error metrics in the original units of measurement for straightforward interpretation.
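A quick numeric illustration of the outlier point, using plain Python with made-up error values: one large error moves MAE linearly but dominates RMSE through squaring.

```python
import math

# Three small errors plus one outlier (illustrative values).
errors = [0.5, 0.5, 0.5, 10.0]

mae = sum(abs(e) for e in errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"MAE:  {mae:.3f}")   # 2.875 -- grows linearly with the outlier
print(f"RMSE: {rmse:.3f}")  # ~5.019 -- dominated by the squared outlier
```

Without the outlier, both metrics would be 0.5; the single large error roughly quintuples MAE but pushes RMSE an order of magnitude higher, which is why MAE is preferred when outliers should not dominate the assessment.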

Code Reference

Source Location

Signature

class MAE(metrics.base.MeanMetric, metrics.base.RegressionMetric):
    def __init__(self):
        pass

Import

from river import metrics

I/O Contract

Method | Parameters | Returns | Description
update | y_true (float), y_pred (float), [w] | None | Updates the metric with one true/predicted pair (optionally weighted)
get | - | float | Returns the current mean absolute error (lower is better)
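The optional weight can be read as turning the mean into a weighted mean of absolute errors. A minimal plain-Python sketch of that contract (an assumption about the semantics of `w`, not River's source code):

```python
# Hypothetical sketch: update/get with an optional sample weight w,
# interpreted as a weighted mean of absolute errors.
class WeightedMAE:
    def __init__(self):
        self._weighted_total = 0.0  # sum of w * |y_true - y_pred|
        self._weight_sum = 0.0      # sum of weights

    def update(self, y_true, y_pred, w=1.0):
        self._weighted_total += w * abs(y_true - y_pred)
        self._weight_sum += w

    def get(self):
        if self._weight_sum == 0.0:
            return 0.0
        return self._weighted_total / self._weight_sum
```

With the default `w=1.0` this reduces to the ordinary (unweighted) MAE.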

Usage Examples

from river import metrics

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]

metric = metrics.MAE()

for yt, yp in zip(y_true, y_pred):
    metric.update(yt, yp)
    print(metric.get())
# 0.5
# 0.5
# 0.3333333333333333
# 0.5

print(metric)
# MAE: 0.5

# Interpretation: On average, predictions are off by 0.5 units
# Errors: |3-2.5|=0.5, |-0.5-0|=0.5, |2-2|=0, |7-8|=1
# MAE = (0.5 + 0.5 + 0 + 1) / 4 = 0.5

# Compare with weighted samples
metric_weighted = metrics.MAE()
for yt, yp, w in zip([3, -0.5, 2, 7], [2.5, 0.0, 2, 8], [1, 1, 2, 1]):
    metric_weighted.update(yt, yp, w)

print(f"Weighted MAE: {metric_weighted.get():.3f}")
# Weighted MAE: 0.400
# The y_true=2 sample has zero error and double weight, pulling the
# weighted mean (0.4) below the unweighted MAE (0.5)
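The weighted figure can be checked by hand, assuming `w` acts as a weight in a weighted mean of absolute errors:

```python
# Per-sample absolute errors and their weights from the example above.
errors = [0.5, 0.5, 0.0, 1.0]
weights = [1, 1, 2, 1]

# Weighted mean: sum(w * |error|) / sum(w) = (0.5 + 0.5 + 0 + 1) / 5
weighted_mae = sum(w * e for w, e in zip(weights, errors)) / sum(weights)
print(weighted_mae)  # 0.4
```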
