
Principle:Scikit-learn Search Execution

From Leeroopedia



Overview

A parallel evaluation process that fits and scores an estimator across all parameter candidates and cross-validation folds.

Description

The Fit Loop

Search execution is the core computational engine of hyperparameter tuning. Once a parameter space has been defined and a cross-validation strategy selected, the search execution process carries out the following steps for every combination of parameter candidate and CV fold:

  1. Clone the base estimator -- Create a fresh, unfitted copy of the estimator so that each evaluation is independent.
  2. Set parameters -- Apply the candidate hyperparameter configuration to the cloned estimator via set_params.
  3. Fit on the training fold -- Train the configured estimator on the training portion of the current CV split.
  4. Score on the test fold -- Evaluate the fitted estimator on the held-out test portion using the specified scoring metric.
  5. Record results -- Store the score, fit time, and score time for this (candidate, fold) pair.

This clone-fit-score cycle is repeated for every combination of candidate parameters and CV folds. If there are C candidates and K folds, the total number of fit-score operations is C × K.
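The five steps above can be sketched with public scikit-learn APIs. A minimal sketch, assuming an illustrative estimator, parameter grid, and CV strategy (the DecisionTreeClassifier, the two-candidate grid, and 3-fold CV are arbitrary choices, not part of the search internals):

```python
from itertools import product

from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
candidates = [{"max_depth": 2}, {"max_depth": 5}]  # C = 2 candidates
cv = KFold(n_splits=3)                             # K = 3 folds
base = DecisionTreeClassifier(random_state=0)

results = []
for params, (train, test) in product(candidates, cv.split(X, y)):
    est = clone(base)                    # 1. fresh, unfitted copy
    est.set_params(**params)             # 2. apply the candidate configuration
    est.fit(X[train], y[train])          # 3. fit on the training fold
    score = est.score(X[test], y[test])  # 4. score on the held-out fold
    results.append((params, score))      # 5. record results

assert len(results) == 2 * 3  # C x K fit-score operations
```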

Parallel Execution

scikit-learn leverages joblib to parallelize the clone-fit-score operations. Each (candidate, fold) pair is an independent unit of work that can be dispatched to a separate process or thread. The n_jobs parameter controls how many workers run simultaneously, while pre_dispatch limits how many jobs are created ahead of time to manage memory consumption.

The parallelization strategy uses a product iteration pattern: candidates are the outer loop and CV splits are the inner loop. This means all folds for a given candidate are dispatched together, enabling efficient workload distribution.
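As an illustration of the two knobs mentioned above, here is a hedged example (the SVC estimator and grid are arbitrary; only n_jobs and pre_dispatch are the point):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]},  # 6 candidates
    cv=5,                     # 5 folds -> 30 independent fit-score tasks
    n_jobs=2,                 # run two workers simultaneously
    pre_dispatch="2*n_jobs",  # cap queued jobs to bound memory consumption
)
search.fit(X, y)
```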

Result Aggregation

After all fit-score operations complete, the raw results are aggregated:

  • Per-split scores are collected into arrays of shape (n_candidates, n_splits).
  • Mean and standard deviation are computed across folds for each candidate.
  • Candidates are ranked by their mean test score; rank 1 denotes the best candidate.
  • Fit times and score times are similarly aggregated.
  • All results are packaged into the cv_results_ dictionary.
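The aggregation above can be verified directly from cv_results_. A small sketch (the LogisticRegression estimator and three-value grid are illustrative):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 1.0, 100.0]},  # 3 candidates
    cv=4,                                  # 4 splits
).fit(X, y)

res = search.cv_results_
# Per-split scores: one "split{i}_test_score" key per fold.
split_scores = np.array([res[f"split{i}_test_score"] for i in range(4)])
assert split_scores.shape == (4, 3)  # (n_splits, n_candidates)
# Means and ranks are precomputed during aggregation.
assert np.allclose(res["mean_test_score"], split_scores.mean(axis=0))
assert res["rank_test_score"].min() == 1  # rank 1 is the best candidate
```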

Error Handling

When a fit or score operation fails for a particular (candidate, fold) pair:

  • If error_score="raise", the exception propagates immediately.
  • If error_score is a numeric value (default np.nan), the error is caught, the specified value is used as the score, and a FitFailedWarning is issued. This allows the search to continue even when some configurations fail.
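This behavior can be demonstrated with a toy estimator that fails on demand. The SometimesFails class below is hypothetical, written only for this illustration; the error_score handling and FitFailedWarning are scikit-learn's:

```python
import warnings

import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.datasets import load_iris
from sklearn.exceptions import FitFailedWarning
from sklearn.model_selection import GridSearchCV


class SometimesFails(ClassifierMixin, BaseEstimator):
    """Toy classifier: fit raises when fail=True, else predicts the majority class."""

    def __init__(self, fail=False):
        self.fail = fail

    def fit(self, X, y):
        if self.fail:
            raise ValueError("simulated fit failure")
        values, counts = np.unique(y, return_counts=True)
        self.majority_ = values[np.argmax(counts)]
        return self

    def predict(self, X):
        return np.full(len(X), self.majority_)


X, y = load_iris(return_X_y=True)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    search = GridSearchCV(
        SometimesFails(),
        param_grid={"fail": [False, True]},
        cv=3,
        error_score=np.nan,  # the default: failed fits score NaN
    ).fit(X, y)

# The failing candidate gets NaN scores and a FitFailedWarning is issued,
# but the search completes and still selects the working candidate.
assert any(issubclass(w.category, FitFailedWarning) for w in caught)
assert np.isnan(search.cv_results_["mean_test_score"][1])
```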

Usage

Search execution is triggered by calling the fit method on a search estimator (e.g., GridSearchCV or RandomizedSearchCV). The user passes in the training data (X, y) and any additional fit parameters. The search execution process is fully automated -- the user does not need to manage the clone-fit-score loop or result aggregation manually.
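A minimal usage sketch with RandomizedSearchCV (the LogisticRegression estimator and the loguniform distribution over C are illustrative choices):

```python
from scipy.stats import loguniform
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

# A single fit call runs the entire clone-fit-score loop and aggregation.
search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    param_distributions={"C": loguniform(1e-3, 1e3)},
    n_iter=5,        # sample 5 candidate configurations
    cv=3,
    random_state=0,
)
search.fit(X, y)

print(search.best_params_)  # best sampled configuration
print(search.best_score_)   # its mean cross-validated score
```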

Theoretical Basis

Clone-Fit-Score Cycle

The clone-fit-score pattern ensures statistical independence between evaluations. By cloning the estimator for each (candidate, fold) pair, scikit-learn guarantees that:

  • No state leaks between different parameter configurations.
  • No state leaks between different CV folds.
  • Each score is an independent estimate of the performance of that configuration on unseen data.

This independence is essential for valid statistical comparisons between candidates.
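The guarantee that clone discards fitted state while preserving hyperparameters can be checked directly (the estimator choice here is illustrative):

```python
from sklearn.base import clone
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

fitted = LogisticRegression(max_iter=1000).fit(X, y)
fresh = clone(fitted)

# clone copies hyperparameters but discards all fitted state,
# so nothing learned on one (candidate, fold) pair can leak into another.
assert fresh.get_params() == fitted.get_params()
assert hasattr(fitted, "coef_") and not hasattr(fresh, "coef_")
```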

Parallel Execution Model

The parallelization follows a data-parallel model where the same operation (fit-and-score) is applied to different inputs (different parameter/fold combinations). scikit-learn uses joblib.Parallel with delayed to express this pattern:

from itertools import product

from joblib import Parallel, delayed
from sklearn.base import clone

# Simplified from BaseSearchCV._fit; _fit_and_score is a private helper
# in sklearn.model_selection._validation.
parallel = Parallel(n_jobs=n_jobs, pre_dispatch=pre_dispatch)
parallel(
    delayed(_fit_and_score)(
        clone(base_estimator), X, y,
        train=train, test=test, parameters=parameters, ...
    )
    for (cand_idx, parameters), (split_idx, (train, test)) in product(
        enumerate(candidate_params),
        enumerate(cv.split(X, y, ...)),
    )
)

The product of candidates and splits generates all work items, which joblib distributes across workers.
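The same data-parallel pattern can be shown in isolation, stripped of search-specific details (the worker function here is a trivial stand-in for _fit_and_score):

```python
from math import sqrt

from joblib import Parallel, delayed

# One operation, many independent inputs: joblib dispatches each
# delayed call to a worker and collects the results in order.
results = Parallel(n_jobs=2)(delayed(sqrt)(i ** 2) for i in range(10))
assert results == [float(i) for i in range(10)]
```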

Result Aggregation

Results are aggregated using the _format_results method, which:

  • Reshapes raw score arrays from flat lists into (n_candidates, n_splits) matrices.
  • Computes the mean and standard deviation across the fold axis for each candidate.
  • Applies ranking using scipy.stats.rankdata with the "min" method (ties receive the minimum rank).
  • Handles NaN scores by excluding them from ranking and assigning them the worst rank.
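The ranking step can be reproduced with scipy directly. A small worked example (the score values are invented):

```python
import numpy as np
from scipy.stats import rankdata

mean_test_scores = np.array([0.90, 0.95, 0.95, 0.80])

# Rank the negated scores so the highest score receives rank 1;
# method="min" gives tied candidates the same (minimum) rank.
ranks = rankdata(-mean_test_scores, method="min").astype(int)
assert list(ranks) == [3, 1, 1, 4]
```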
