Principle: InterpretML Bagged Gradient Boosting
| Metadata | |
|---|---|
| Sources | InterpretML, Gradient Boosting, Bagging |
| Domains | Machine_Learning, Ensemble_Methods |
| Last Updated | 2026-02-07 12:00 GMT |
Overview
A training algorithm that combines gradient boosting with bootstrap aggregation to learn additive term score functions for Generalized Additive Models.
Description
Bagged Gradient Boosting trains an EBM (Explainable Boosting Machine) by cycling through each feature term and performing a small gradient boosting step on it. Unlike standard gradient boosting machines (GBMs), which grow full trees over all features, EBMs boost one feature at a time in round-robin fashion to preserve additivity. Each boosting round fits a shallow tree to the negative gradient of the loss function for a single term, producing an update to that term's piecewise-constant lookup table. Outer bags (bootstrap replicates of the training set) provide uncertainty estimates as the standard deviation of term scores across bags, while inner bags support early stopping within each outer bag. Because the cyclic updates use a small learning rate, the order in which features are visited has minimal impact on the learned functions.
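The cyclic, one-feature-at-a-time loop can be sketched as follows. This is a minimal NumPy illustration for squared loss with depth-1 trees (stumps); the function and variable names are hypothetical and this is not the InterpretML implementation, which uses histogram binning and more elaborate tree growing:

```python
import numpy as np

def fit_stump(x, residual):
    """TreeFit for one feature: find the single threshold that minimizes
    squared error against the residual, giving a piecewise-constant update."""
    best = (np.inf, None, 0.0, 0.0)
    for thr in np.unique(x)[:-1]:  # exclude max so both sides stay nonempty
        left = x <= thr
        lv, rv = residual[left].mean(), residual[~left].mean()
        sse = ((residual[left] - lv) ** 2).sum() + ((residual[~left] - rv) ** 2).sum()
        if sse < best[0]:
            best = (sse, thr, lv, rv)
    return best[1], best[2], best[3]

def cyclic_boost(X, y, learning_rate=0.1, rounds=50):
    """Round-robin boosting: one small gradient step per feature term per round."""
    n, d = X.shape
    pred = np.full(n, y.mean())
    terms = []  # (feature index, threshold, scaled left value, scaled right value)
    for _ in range(rounds):
        for t in range(d):  # cycle through the terms to maintain additivity
            residual = y - pred  # negative gradient of squared loss
            thr, lv, rv = fit_stump(X[:, t], residual)
            if thr is None:  # feature is constant; nothing to split
                continue
            pred += learning_rate * np.where(X[:, t] <= thr, lv, rv)
            terms.append((t, thr, learning_rate * lv, learning_rate * rv))
    return pred, terms
```

Because each stump touches only one feature, summing a term's recorded updates recovers that feature's shape function as a lookup table.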
Usage
Use this principle as the core training loop for EBM models. It applies whenever you need to learn piecewise-constant shape functions for each feature while maintaining interpretability of individual feature contributions.
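Outer bagging is what turns the point estimates of each shape function into scores with uncertainty. The sketch below is a deliberately simplified, hypothetical illustration: it fits per-bin means directly instead of boosting, but shows how bootstrap replicates yield a per-bin mean score and a standard deviation:

```python
import numpy as np

def bagged_shape(x, y, n_bags=8, n_bins=8, seed=0):
    """Outer bagging for one term: fit a binned (piecewise-constant) shape
    function on each bootstrap sample, then return the per-bin mean score
    and its standard deviation across bags as an uncertainty estimate."""
    rng = np.random.default_rng(seed)
    # Quantile bin edges computed once on the full data, shared by all bags.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    scores = np.zeros((n_bags, n_bins))
    for b in range(n_bags):
        idx = rng.integers(0, len(x), len(x))  # bootstrap resample (outer bag)
        xb, yb = x[idx], y[idx]
        bins = np.searchsorted(edges, xb)
        for k in range(n_bins):
            mask = bins == k
            scores[b, k] = yb[mask].mean() if mask.any() else 0.0
    return scores.mean(axis=0), scores.std(axis=0)
```

In InterpretML itself, the number of bootstrap replicates is controlled by the `outer_bags` parameter of the EBM estimators, and the resulting standard deviations drive the error bars shown in global explanations.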
Theoretical Basis
At each round $r$, for each term $t$:

$$f_t \leftarrow f_t + \eta \cdot \mathrm{TreeFit}\!\left(x_t,\, -\frac{\partial L(y, \hat{y})}{\partial \hat{y}}\right)$$

where $\eta$ is a small learning rate, $x_t$ is the feature(s) for term $t$, and TreeFit learns a piecewise-constant function via optimal 1D splits.
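A single term update under squared loss can be checked numerically. This is a hypothetical minimal sketch in which TreeFit is a one-split stump whose split point is known in advance, and the residuals play the role of the negative gradient:

```python
import numpy as np

# Toy data for one term's boosting step.
x = np.array([0.0, 1.0, 2.0, 3.0])   # feature values for term t
y = np.array([1.0, 1.0, 3.0, 3.0])   # targets
pred = np.zeros(4)                   # current model output f_t
eta = 0.5                            # small learning rate

residual = y - pred                  # negative gradient of squared loss
# TreeFit: the best single split here is x <= 1, with leaf means 1.0 and 3.0
left = x <= 1.0
update = np.where(left, residual[left].mean(), residual[~left].mean())
pred = pred + eta * update           # f_t <- f_t + eta * TreeFit(x_t, residual)
# pred is now [0.5, 0.5, 1.5, 1.5]: half of each leaf mean, as eta = 0.5
```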