
Implementation:Online ml River Metrics MSE

From Leeroopedia


Knowledge Sources
Domains Online_Learning, Evaluation_Metrics, Regression
Last Updated 2026-02-08 16:00 GMT

Overview

Mean Squared Error family including MSE, RMSE, and RMSLE for regression evaluation.

Description

This module provides squared-error metrics. MSE computes the mean of squared errors, (y_true - y_pred)², which heavily penalizes large errors. RMSE takes the square root of MSE, returning the error in the original units of the target. RMSLE computes the root mean squared logarithmic error by applying the transformation log(y + 1) before computing RMSE; it is useful for data with exponential trends, or when relative errors matter more than absolute errors.
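As a quick check on these definitions, all three metrics can be computed by hand in plain Python (a sketch of the formulas, not the river API). RMSLE is simply the RMSE of the log1p-transformed values:

```python
import math

y_true = [3.0, 2.0, 7.0]
y_pred = [2.5, 2.0, 8.0]
n = len(y_true)

# MSE: mean of squared errors
mse = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred)) / n

# RMSE: square root of MSE, back in the original units
rmse = math.sqrt(mse)

# RMSLE: RMSE computed on log(y + 1)-transformed values
msle = sum((math.log1p(yt) - math.log1p(yp)) ** 2
           for yt, yp in zip(y_true, y_pred)) / n
rmsle = math.sqrt(msle)

print(mse, rmse, rmsle)
```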

Usage

Use MSE/RMSE when large errors should be penalized more heavily than small ones, as is common in regression problems where outliers matter. RMSE is usually preferred over MSE for interpretability, since it is expressed in the original units. Use RMSLE when you care about relative (percentage) errors rather than absolute errors, or when the target variable spans several orders of magnitude; RMSLE is less sensitive to outliers and suits exponential growth patterns.
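The scale-independence of RMSLE can be illustrated in plain Python (a sketch of the per-observation logarithmic error, not the river API): a 20% over-prediction contributes roughly the same squared log error whether the target is 10 or 1000, while the plain squared error differs by a factor of 10,000:

```python
import math

def sq_log_err(yt, yp):
    # Squared logarithmic error for a single observation
    return (math.log1p(yt) - math.log1p(yp)) ** 2

# Same 20% relative error at two very different scales
small = sq_log_err(10, 12)      # roughly 0.028
large = sq_log_err(1000, 1200)  # roughly 0.033 -- about the same

# Plain squared errors differ by 10,000x instead
print(small, large)
print((10 - 12) ** 2, (1000 - 1200) ** 2)  # 4 vs 40000
```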

Code Reference

Source Location

Signature

class MSE(metrics.base.MeanMetric, metrics.base.RegressionMetric):
    def __init__(self):
        pass

class RMSE(MSE):
    def __init__(self):
        pass

class RMSLE(RMSE):
    def __init__(self):
        pass

Import

from river import metrics

I/O Contract

Method Parameters Returns Description
update y_true (float), y_pred (float), w (float, optional sample weight) None Updates the metric with a true/predicted pair
get - float Returns the current error value (lower is better)

Usage Examples

from river import metrics

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]

# Mean Squared Error
metric_mse = metrics.MSE()
for yt, yp in zip(y_true, y_pred):
    metric_mse.update(yt, yp)
    print(metric_mse.get())
# 0.25
# 0.25
# 0.1666
# 0.375

# Squared errors: 0.25, 0.25, 0, 1
# MSE = (0.25 + 0.25 + 0 + 1) / 4 = 0.375

# Root Mean Squared Error
metric_rmse = metrics.RMSE()
for yt, yp in zip(y_true, y_pred):
    metric_rmse.update(yt, yp)
    print(metric_rmse.get())
# 0.5
# 0.5
# 0.408248
# 0.612372

print(metric_rmse)
# RMSE: 0.612372
# RMSE = sqrt(0.375) = 0.612 (in original units)

# Root Mean Squared Logarithmic Error
metric_rmsle = metrics.RMSLE()
for yt, yp in zip(y_true, y_pred):
    metric_rmsle.update(yt, yp)

print(metric_rmsle)
# RMSLE: 0.357826

# RMSLE penalizes under-predictions more than over-predictions
# and is scale-independent

# Comparison: Large error impact
y_true2 = [1, 2, 3, 100]  # Last value is outlier
y_pred2 = [1, 2, 3, 80]   # 20-unit error on outlier

mse2 = metrics.MSE()
mae2 = metrics.MAE()

for yt, yp in zip(y_true2, y_pred2):
    mse2.update(yt, yp)
    mae2.update(yt, yp)

print(f"MSE: {mse2.get():.2f}, MAE: {mae2.get():.2f}")
# MSE: 100.00, MAE: 5.00
# MSE heavily penalizes the large error (20² = 400, mean 100)
# MAE treats it linearly (|20| = 20, mean 5)
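The under- vs over-prediction asymmetry of RMSLE noted above can be checked directly (a plain-Python sketch of the per-observation logarithmic error, not the river API): missing a true value of 100 by 20 on the low side produces a larger log error than missing it by 20 on the high side.

```python
import math

def log_err(yt, yp):
    # Absolute logarithmic error for one observation
    return abs(math.log1p(yt) - math.log1p(yp))

# Same absolute miss of 20 around a true value of 100
under = log_err(100, 80)   # under-prediction
over = log_err(100, 120)   # over-prediction
print(under > over)  # True: under-predictions are penalized more
```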
