
Principle: Online ML / River / Iterative Progressive Validation

From Leeroopedia


Knowledge Sources: River; River Docs; "Beating the Hold-Out: Bounds for K-fold and Progressive Cross-Validation"
Domains: Online Learning, Evaluation, Monitoring
Last Updated: 2026-02-08 16:00 GMT

Overview

Iterative progressive validation is a generator-based variant of progressive validation that yields intermediate evaluation results at configurable intervals, enabling real-time monitoring and learning curve visualization.

Description

While standard progressive validation (evaluate.progressive_val_score) returns only the final metric value after processing the entire dataset, iterative progressive validation (evaluate.iter_progressive_val_score) exposes the evaluation as a Python generator that yields intermediate checkpoints throughout the stream. This provides fine-grained visibility into how model performance evolves over time.

At each checkpoint, the generator yields a dictionary containing:

  • The current metric state (e.g., {'Accuracy': Accuracy: 89.50%})
  • The current step count (number of observations processed)
  • Optionally, the elapsed time, memory usage, and the most recent prediction

The step parameter controls the checkpoint frequency: setting step=100 yields a result every 100 observations. Setting step=1 yields a result after every single observation, which is useful for detailed learning curve analysis but may impact performance.

This principle follows the same predict-then-learn protocol as standard progressive validation, ensuring that all theoretical guarantees about honest evaluation are preserved. The only difference is in the output mechanism: a generator vs. a single return value.

The generator-based design follows the lazy evaluation principle: intermediate results are computed only when requested by the consumer. This makes it compatible with early stopping, conditional logic, and streaming visualization tools.
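The predict-then-learn loop with checkpoints can be sketched in pure Python. This is an illustrative re-implementation, not River's actual code: the `MajorityClass` model, the `Accuracy` class, and the hard-coded checkpoint keys are simplified stand-ins for what `evaluate.iter_progressive_val_score` yields.

```python
from collections import Counter

class MajorityClass:
    """Toy online model: always predicts the most frequent label seen so far."""
    def __init__(self):
        self.counts = Counter()
    def predict_one(self, x):
        return self.counts.most_common(1)[0][0] if self.counts else None
    def learn_one(self, x, y):
        self.counts[y] += 1

class Accuracy:
    """Toy streaming accuracy metric."""
    def __init__(self):
        self.correct = 0
        self.total = 0
    def update(self, y_true, y_pred):
        self.correct += int(y_true == y_pred)
        self.total += 1
    def get(self):
        return self.correct / self.total if self.total else 0.0

def iter_progressive_val_score(dataset, model, metric, step=1):
    """Predict-then-learn loop that yields a checkpoint every `step` rows."""
    n = 0
    for x, y in dataset:
        y_pred = model.predict_one(x)   # predict before the label is revealed
        metric.update(y, y_pred)        # score the honest prediction
        model.learn_one(x, y)           # only then let the model learn
        n += 1
        if n % step == 0:
            yield {"Accuracy": metric.get(), "Step": n}

stream = [({}, label) for label in [1, 1, 0, 1, 1, 0, 1, 1]]
for checkpoint in iter_progressive_val_score(stream, MajorityClass(), Accuracy(), step=4):
    print(checkpoint)
# → {'Accuracy': 0.5, 'Step': 4}
# → {'Accuracy': 0.625, 'Step': 8}
```

Because this is a generator, consuming it with `step=1` traces a point-by-point learning curve, while a larger `step` trades resolution for fewer yields.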

Usage

Use iterative progressive validation when:

  • You want to plot learning curves showing how model performance evolves with the number of observations.
  • You need to implement early stopping based on metric values.
  • You want real-time monitoring of model performance during training.
  • You need to collect intermediate results for logging, dashboards, or experiment tracking.
  • You want access to individual predictions alongside metric snapshots.
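As one sketch of the early-stopping use case: because the generator is lazy, the consumer can simply stop iterating once a condition is met, and no further observations are processed. The `checkpoints()` stand-in and the `TARGET` threshold below are hypothetical, not part of River's API.

```python
def checkpoints():
    # Stand-in for evaluate.iter_progressive_val_score(...);
    # yields checkpoint dicts at fixed step intervals.
    for step, acc in [(100, 0.61), (200, 0.72), (300, 0.81), (400, 0.83), (500, 0.84)]:
        yield {"Step": step, "Accuracy": acc}

TARGET = 0.80  # hypothetical stopping criterion
last = None
for cp in checkpoints():
    last = cp
    if cp["Accuracy"] >= TARGET:
        break  # the generator is never advanced past this checkpoint

print(last)  # → {'Step': 300, 'Accuracy': 0.81}
```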

Theoretical Basis

The iterative variant uses the same predict-then-learn protocol as standard progressive validation. The key difference is the introduction of checkpoints that control when intermediate results are yielded.

Protocol with checkpoints:

function iter_progressive_val_score(dataset, model, metric, step):
    checkpoints = [step, 2*step, 3*step, ...]
    next_checkpoint = checkpoints.next()
    n = 0
    last_yielded = 0

    for (x, y) in dataset:
        y_pred = model.predict(x)
        metric.update(y, y_pred)
        model.learn_one(x, y)
        n += 1

        if n == next_checkpoint:
            yield {
                "Metric": metric,
                "Step": n,
                (optional) "Time": elapsed,
                (optional) "Memory": model.memory_usage,
                (optional) "Prediction": y_pred
            }
            last_yielded = n
            next_checkpoint = checkpoints.next()

    # Yield final results if the stream did not end exactly on a checkpoint
    if n != last_yielded:
        yield final_report
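The final-yield guard handles streams whose length is not a multiple of step. A minimal runnable sketch of just that checkpoint bookkeeping (illustrative, not River's implementation):

```python
import itertools

def iter_val(dataset, step):
    """Yield checkpoint step counts, including a final partial checkpoint
    when the stream length is not a multiple of `step`."""
    checkpoints = itertools.count(step, step)  # step, 2*step, 3*step, ...
    next_cp = next(checkpoints)
    n = 0
    for _ in dataset:
        n += 1
        if n == next_cp:
            yield n
            next_cp = next(checkpoints)
    if n != next_cp - step:  # stream ended between checkpoints
        yield n

print(list(iter_val(range(25), step=10)))  # → [10, 20, 25]
print(list(iter_val(range(20), step=10)))  # → [10, 20]
```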

Learning curves: The sequence of yielded metric values m_1, m_2, ..., m_k at steps s_1, s_2, ..., s_k traces a learning curve. For well-behaved models:

  • The curve typically shows rapid improvement early on, followed by diminishing returns.
  • Comparison of learning curves across models reveals which model learns faster and which achieves better asymptotic performance.
  • Diverging or oscillating curves may indicate concept drift, poor hyperparameters, or model instability.
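Collecting such a curve is just a matter of accumulating the yielded pairs. The `running_accuracy` generator below is an illustrative stand-in for an iterative evaluator, fed a predictor whose early mistakes are diluted over time:

```python
def running_accuracy(labels, preds, step):
    """Stand-in iterative evaluator: yields (step, accuracy-so-far) pairs."""
    correct = 0
    for n, (y, y_pred) in enumerate(zip(labels, preds), start=1):
        correct += int(y == y_pred)
        if n % step == 0:
            yield n, correct / n

labels = [1, 1, 0, 1, 0, 1, 1, 1]
preds  = [0, 1, 1, 1, 0, 1, 1, 1]  # early errors, then consistent hits
steps, curve = zip(*running_accuracy(labels, preds, step=2))
print(steps)  # → (2, 4, 6, 8)
print(curve)  # accuracy rises as early errors are diluted
```

The `(steps, curve)` pairs can be passed directly to any plotting tool to visualize the learning curve.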

Relationship to progressive validation: The progressive_val_score function is implemented on top of iter_progressive_val_score by simply consuming the entire generator and returning the final metric. This means both functions share the same underlying implementation and produce identical evaluation results.
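This relationship can be sketched as a thin wrapper that exhausts the generator and keeps only the last checkpoint. The `iter_scores` generator and `final_score` wrapper below are illustrative stand-ins, not River's actual functions:

```python
def iter_scores(stream, step=1):
    """Stand-in generator: yields a running mean of the stream as the metric."""
    total = 0.0
    for n, value in enumerate(stream, start=1):
        total += value
        if n % step == 0:
            yield {"Metric": total / n, "Step": n}

def final_score(stream, step=1):
    """Consume the whole generator; the last checkpoint is the final result."""
    checkpoint = None
    for checkpoint in iter_scores(stream, step):
        pass
    return checkpoint

print(final_score([1, 0, 1, 1], step=2))  # → {'Metric': 0.75, 'Step': 4}
```

Sharing one implementation guarantees the single-value and iterative variants can never disagree on the final metric.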
