
Implementation:DistrictDataLabs Yellowbrick CVScores Visualizer

From Leeroopedia


Knowledge Sources
Domains Machine_Learning, Model_Selection, Visualization
Last Updated 2026-02-08 00:00 GMT

Overview

Concrete tool for visualizing cross-validated scores as a bar chart with a mean score reference line, provided by the Yellowbrick library.

Description

The CVScores visualizer displays cross-validated scores as a bar chart where each bar represents the score from a single cross-validation fold, with a horizontal dashed line indicating the mean score across all folds. This provides an immediate visual summary of both the model's average performance and the variability across different data partitions.

The class extends ModelVisualizer from the Yellowbrick base module. When fit(X, y) is called, the visualizer delegates to scikit-learn's cross_val_score (from sklearn.model_selection), passing along the estimator, data, cross-validation strategy, and scoring metric. The returned per-fold scores are stored as cv_scores_ and their mean as cv_scores_mean_. The draw() method then renders a bar chart with fold indices on the x-axis and scores on the y-axis, plus a dashed horizontal line at the mean. An optional color parameter controls the color of both the bars and the line.

This is one of the simplest visualizers in the Yellowbrick model selection module, making it easy to get a quick overview of model stability.
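The rendering described above can be sketched in a few lines of matplotlib. This is a hedged illustration, not Yellowbrick's actual source; the per-fold scores and colors are stand-in values:

```python
# A minimal sketch of what draw() renders: one bar per CV fold plus a
# dashed horizontal line at the mean score. (Not Yellowbrick's code;
# the scores below are illustrative stand-ins.)
import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs anywhere
import matplotlib.pyplot as plt
import numpy as np

cv_scores = np.array([0.92, 0.88, 0.95, 0.90, 0.93])  # stand-in per-fold scores
cv_scores_mean = cv_scores.mean()

fig, ax = plt.subplots()
# One bar per fold, indexed from 1
ax.bar(np.arange(1, len(cv_scores) + 1), cv_scores, color="tab:blue")
# Dashed reference line at the mean, as CVScores draws it
ax.axhline(cv_scores_mean, linestyle="--", color="tab:blue",
           label=f"mean = {cv_scores_mean:.3f}")
ax.set_xlabel("CV fold")
ax.set_ylabel("score")
ax.legend()
```

Reading the plot is then immediate: tightly clustered bars around the dashed line indicate a stable model, while a fold whose bar falls well below it flags a sensitive data partition.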

Usage

Use this visualizer when you need a quick visual assessment of how consistently a model performs across cross-validation folds. It is appropriate for any scikit-learn estimator with a valid scoring metric.

Code Reference

Source Location

  • Repository: yellowbrick
  • File: yellowbrick/model_selection/cross_validation.py
  • Class Lines: L34-187 (class), L110 (__init__), L117-145 (fit)
  • Quick Method Lines: L195-278

Signature

class CVScores(ModelVisualizer):
    def __init__(self, estimator, ax=None, cv=None, scoring=None, color=None, **kwargs):

Import

from yellowbrick.model_selection import CVScores

I/O Contract

Inputs

  • estimator (scikit-learn estimator, required) — An object implementing fit and predict. Cloned for each validation.
  • ax (matplotlib.Axes, optional) — The axes object to plot on. Default: None (current axes).
  • cv (int, CV generator, or iterable, optional) — Cross-validation splitting strategy. Default: None (3-fold).
  • scoring (string, callable, or None, optional) — Scoring metric. Default: None (estimator's default scorer).
  • color (string, optional) — Color for bars and mean line. Default: None.
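Because CVScores delegates to cross_val_score, the cv argument is resolved by scikit-learn's usual rules: an integer becomes stratified folds for a classification target and plain k-fold otherwise. scikit-learn's check_cv utility makes this resolution visible (the toy label array below is illustrative; the exact default fold count depends on your scikit-learn version):

```python
import numpy as np
from sklearn.model_selection import check_cv, StratifiedKFold, KFold

# check_cv resolves a cv argument the same way cross_val_score does
# internally: an integer becomes StratifiedKFold for a classification
# target, plain KFold otherwise.
y_class = np.array([0, 1] * 10)  # illustrative binary labels

cv_clf = check_cv(5, y=y_class, classifier=True)   # -> StratifiedKFold(5)
cv_reg = check_cv(5, y=None, classifier=False)     # -> KFold(5)

print(type(cv_clf).__name__)  # StratifiedKFold
print(type(cv_reg).__name__)  # KFold
```

Passing an explicit splitter object (e.g. a shuffled StratifiedKFold with a fixed random_state) as cv works the same way and makes the folds reproducible.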

The fit(X, y) method accepts:

  • X (array-like, shape (n_samples, n_features), required) — Training feature matrix.
  • y (array-like, shape (n_samples,), optional) — Target values.

Outputs

  • cv_scores_ (ndarray, shape (n_splits,)) — The cross-validated score from each fold.
  • cv_scores_mean_ (float) — The mean cross-validated score across all folds.
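Since fit() stores cross_val_score's output directly, the fitted attributes can be reproduced with scikit-learn alone. A minimal sketch, assuming the iris dataset and SVC settings are merely illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# cross_val_score returns one score per fold, exactly what
# CVScores would store as cv_scores_ after fit()
scores = cross_val_score(SVC(kernel="linear", C=1), X, y,
                         cv=5, scoring="f1_macro")

cv_scores_ = scores              # ndarray of shape (n_splits,)
cv_scores_mean_ = scores.mean()  # float, plotted as the dashed line

print(cv_scores_.shape)  # (5,)
```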

Usage Examples

Basic Usage

from sklearn.datasets import load_iris
from sklearn.svm import SVC
from yellowbrick.model_selection import CVScores

# Load data (any feature matrix X and target y will do)
X, y = load_iris(return_X_y=True)

# Create the visualizer, run cross-validation, and render the plot
viz = CVScores(SVC(kernel="linear", C=1), cv=5, scoring="f1_macro")
viz.fit(X, y)
viz.show()

Quick Method

from sklearn.datasets import load_iris
from sklearn.svm import SVC
from yellowbrick.model_selection import cv_scores

X, y = load_iris(return_X_y=True)

# The quick method creates the visualizer, fits it, and shows the plot in one call
cv_scores(SVC(kernel="linear", C=1), X, y, cv=5, scoring="f1_macro")

Related Pages

Implements Principle

Requires Environment
