Principle: Evidently Report Configuration
| Knowledge Sources | |
|---|---|
| Domains | ML_Monitoring, Data_Quality, Evaluation |
| Last Updated | 2026-02-14 12:00 GMT |
Overview
A declarative evaluation configuration mechanism that assembles metrics and presets into an executable evaluation pipeline.
Description
Report Configuration is the process of declaring which metrics and presets to compute over datasets. A Report object is initialized with a list of MetricOrContainer instances (individual metrics like ValueDrift or preset containers like ClassificationQuality) and optional settings (metadata, tags, test inclusion).
The Report acts as a declarative specification: it describes what to compute, not how. The actual computation happens when Report.run() is called with datasets. This separation of configuration from execution enables:
- Reusable report templates across multiple datasets
- Composable metric combinations
- Preset-based shortcuts for common evaluation patterns
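The configuration/execution split above can be sketched with a minimal stand-in in plain Python. This is an illustrative mock, not the actual Evidently classes: the `MeanValue` and `Report` names here only mimic the documented pattern, in which a report object holds the metric spec and is reused across datasets when run.

```python
# Illustrative mock of the config/execution split (NOT the Evidently API).
class MeanValue:
    def __init__(self, column):
        self.column = column

    def compute(self, dataset):
        values = dataset[self.column]
        return sum(values) / len(values)


class Report:
    """Declarative spec: stores WHAT to compute; run() decides WHEN."""

    def __init__(self, metrics):
        self.metrics = metrics  # no data touched at configuration time

    def run(self, dataset):
        # Computation is deferred to this call.
        return {m.column: m.compute(dataset) for m in self.metrics}


# One template, reused across multiple datasets:
template = Report([MeanValue("price")])
jan = template.run({"price": [10, 20, 30]})  # {'price': 20.0}
feb = template.run({"price": [40, 60]})      # {'price': 50.0}
```

Because `Report.__init__` never touches data, the same `template` object can be run against any dataset that contains the configured columns.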
Usage
Use this principle as the central step in every Evidently evaluation workflow. Configure a Report after defining your data schema and before executing it with data.
Theoretical Basis
Report configuration follows the builder pattern where evaluation components are assembled declaratively:
# Pseudocode: declarative evaluation spec
report = Report(
    metrics=[                  # WHAT to compute
        ValueDrift("col1"),
        ClassificationQuality(),
        MeanValue("col2"),
    ],
    include_tests=True,        # enable auto-tests
)

# Execution is deferred until report.run()
snapshot = report.run(data)    # HOW and WHEN to compute
Presets expand into multiple individual metrics at execution time, allowing compact configuration.
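This expansion step can be sketched in the same way. Again, the names below (`RangePreset`, `MinValue`, `MaxValue`, `run_report`) are a hypothetical mock of the behavior described above, not Evidently internals: a preset is a compact container that yields its individual metrics only when the report is executed.

```python
# Illustrative mock of preset expansion at execution time (NOT Evidently internals).
class MinValue:
    def __init__(self, column):
        self.column = column

    def compute(self, dataset):
        return min(dataset[self.column])


class MaxValue:
    def __init__(self, column):
        self.column = column

    def compute(self, dataset):
        return max(dataset[self.column])


class RangePreset:
    """Compact shorthand that expands into several metrics at run time."""

    def __init__(self, column):
        self.column = column

    def expand(self):
        return [MinValue(self.column), MaxValue(self.column)]


def run_report(items, dataset):
    # Flatten presets into individual metrics, then compute each one.
    metrics = []
    for item in items:
        metrics.extend(item.expand() if hasattr(item, "expand") else [item])
    return [(type(m).__name__, m.column, m.compute(dataset)) for m in metrics]


results = run_report([RangePreset("col2")], {"col2": [3, 7, 5]})
# One preset in the config produced two metric results: min=3, max=7
```

Keeping expansion inside the run step is what lets a single preset entry stay compact in the configuration while still producing one result per underlying metric.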