
Principle:Interpretml Interpret Bagged Gradient Boosting

From Leeroopedia


Metadata
Sources: InterpretML, Gradient Boosting, Bagging
Domains: Machine_Learning, Ensemble_Methods
Last Updated: 2026-02-07 12:00 GMT

Overview

A training algorithm that combines gradient boosting with bootstrap aggregation to learn additive term score functions for Generalized Additive Models.

Description

Bagged Gradient Boosting trains an Explainable Boosting Machine (EBM) by cycling through each feature term and performing a small gradient boosting step. Unlike standard gradient boosting machines (GBMs), which grow full trees over all features, EBMs boost one feature at a time in round-robin order to preserve additivity. Each boosting round fits a shallow tree to the negative gradient of the loss for a single term, producing a lookup-table update to that term's shape function. Bagging over bootstrap samples ("outer bags") provides uncertainty estimates via standard deviations across bags, while "inner bags" supply held-out data for early stopping. Cyclic boosting with a small learning rate ensures that the order in which features are visited has minimal impact on the learned functions.
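The outer-bagging step above can be sketched as follows. This is a minimal illustration, not InterpretML's implementation: `train_ebm` is a hypothetical callable standing in for the cyclic-boosting trainer, assumed to return one score table per term.

```python
import numpy as np

def outer_bag_scores(X, y, train_ebm, n_bags=8, seed=0):
    """Train one model per bootstrap sample ("outer bag") and aggregate
    per-term score tables across bags.  `train_ebm` is a hypothetical
    callable returning an array of shape (n_terms, n_bins)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    bags = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)        # bootstrap resample with replacement
        bags.append(train_ebm(X[idx], y[idx]))  # per-term lookup tables
    stacked = np.asarray(bags)                  # (n_bags, n_terms, n_bins)
    # Mean across bags is the final additive model; the standard
    # deviation gives per-bin uncertainty for each term's shape function.
    return stacked.mean(axis=0), stacked.std(axis=0)
```

Averaging the bagged shape functions smooths the piecewise-constant steps, and the per-bin standard deviation is what EBM plots render as error bars.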

Usage

Use this principle as the core training loop for EBM models. It applies whenever you need to learn piecewise-constant shape functions for each feature while maintaining interpretability of individual feature contributions.

Theoretical Basis

At each round $r$, for each term $t$:

$$g_t^{(r)} = -\frac{\partial L\big(y, F^{(r-1)}(x)\big)}{\partial F^{(r-1)}(x)}$$

$$h_t^{(r)} = \mathrm{TreeFit}\big(x_t,\, g_t^{(r)}\big)$$

$$F^{(r)}(x) = F^{(r-1)}(x) + \eta\, h_t^{(r)}(x_t)$$

where $\eta$ is a small learning rate, $x_t$ is the feature(s) for term $t$, and $\mathrm{TreeFit}$ learns a piecewise-constant function via optimal 1D splits.
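The update rules above can be instantiated for squared loss, where the negative gradient $g_t^{(r)}$ is simply the residual $y - F^{(r-1)}(x)$. A simplified sketch on pre-binned features: the per-bin mean of the gradient stands in for $\mathrm{TreeFit}$ (the real algorithm searches for optimal 1D splits rather than updating every bin):

```python
import numpy as np

def fit_cyclic_ebm(X_binned, y, n_bins, n_rounds=300, eta=0.1):
    """Cyclic gradient boosting for an additive model under squared loss.
    X_binned holds integer bin indices per feature; each term's score
    function is a lookup table over its bins (piecewise-constant)."""
    n, d = X_binned.shape
    scores = [np.zeros(n_bins) for _ in range(d)]  # one table per term
    pred = np.zeros(n)                             # current F(x)
    for _ in range(n_rounds):
        for t in range(d):                 # round-robin over terms
            g = y - pred                   # negative gradient of squared loss
            # Piecewise-constant fit: mean gradient within each bin of feature t
            update = np.zeros(n_bins)
            for b in range(n_bins):
                mask = X_binned[:, t] == b
                if mask.any():
                    update[b] = g[mask].mean()
            scores[t] += eta * update                  # term table update
            pred += eta * update[X_binned[:, t]]       # F <- F + eta * h_t(x_t)
    return scores
```

Because each pass takes only a small step per term, the residual passed to later terms changes slowly, which is why the visiting order has little effect on the final shape functions.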
