Implementation: InterpretML process_terms
| Field | Value |
|---|---|
| Sources | InterpretML |
| Domains | Machine_Learning, Interpretability |
| Last Updated | 2026-02-07 12:00 GMT |
Overview
process_terms is a utility function in the InterpretML library that aggregates the outputs of bagged EBM training into final term scores.
Description
The process_terms function takes the per-bag intercepts and score tensors from all outer bags, computes weighted averages to produce the final model scores, and calculates per-term standard deviations for confidence bands. It delegates to the native C++ library for efficient averaging. The resulting intercept, term scores, and standard deviations become the core attributes of a fitted EBM model (ebm.intercept_, ebm.term_scores_, ebm.standard_deviations_).
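The weighted averaging described above can be sketched in plain NumPy. This is an illustrative sketch only, not the library's implementation (which delegates to native C++ code); the function name `aggregate_bags` is hypothetical, the shapes follow the I/O contract below, and `bin_weights` is omitted here for simplicity.

```python
import numpy as np

def aggregate_bags(bagged_intercept, bagged_scores, bag_weights):
    """Hypothetical sketch: weighted-average per-bag results into final
    scores and per-term standard deviations for confidence bands."""
    total = bag_weights.sum()
    # Weighted mean intercept across bags -> shape [n_classes]
    intercept = (bag_weights[:, None] * bagged_intercept).sum(axis=0) / total
    term_scores = []
    standard_deviations = []
    for scores in bagged_scores:  # scores shape: [n_bags, *term_shape]
        # Broadcast bag weights over the term dimensions
        w = bag_weights.reshape((-1,) + (1,) * (scores.ndim - 1))
        mean = (w * scores).sum(axis=0) / total
        var = (w * (scores - mean) ** 2).sum(axis=0) / total
        term_scores.append(mean)
        standard_deviations.append(np.sqrt(var))
    return intercept, term_scores, standard_deviations

# Tiny example: 2 bags, one term with 3 bins, a single logit per bin
bagged_intercept = np.array([[0.1], [0.3]])
bagged_scores = [np.array([[[0.0], [1.0], [2.0]],
                           [[0.2], [1.2], [2.2]]])]
bag_weights = np.array([1.0, 1.0])
intercept, term_scores, stds = aggregate_bags(
    bagged_intercept, bagged_scores, bag_weights)
print(intercept)                  # [0.2]
print(term_scores[0].ravel())     # [0.1 1.1 2.1]
```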
Usage
Import this function when finalizing an EBM model after all outer bags have completed training, or when merging EBM models that have been harmonized to the same bin structure.
Code Reference
Source Location
- Repository: interpretml/interpret
- File: python/interpret-core/interpret/glassbox/_ebm/_utils.py
- Lines: 200–230
Signature
```python
def process_terms(bagged_intercept, bagged_scores, bin_weights, bag_weights):
```
Import
```python
from interpret.glassbox._ebm._utils import process_terms
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| bagged_intercept | ndarray | Yes | Per-bag intercept values (shape: [n_bags, n_classes]) |
| bagged_scores | list[ndarray] | Yes | Per-term per-bag score tensors |
| bin_weights | list | Yes | Bin weight tensors for each term |
| bag_weights | ndarray | Yes | Per-bag weights for weighted averaging |
Outputs
| Index | Name | Type | Description |
|---|---|---|---|
| 0 | intercept | ndarray | Weighted average intercept across all bags |
| 1 | term_scores | list[ndarray] | Weighted average score tensors per term |
| 2 | standard_deviations | list[ndarray] | Standard deviation tensors per term (for confidence bands) |
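The third output exists so callers can draw confidence bands around each term's shape function. A minimal sketch of that combination, assuming a ±2 standard-deviation band width (an illustrative choice, not necessarily the width interpret's plots use):

```python
import numpy as np

# Final scores and standard deviations for one term with 3 bins
# (made-up values for illustration)
term_scores = np.array([-0.5, 0.0, 0.8])
standard_deviations = np.array([0.1, 0.05, 0.2])

# Confidence band: score +/- 2 standard deviations (illustrative width)
lower = term_scores - 2.0 * standard_deviations
upper = term_scores + 2.0 * standard_deviations
print(lower)  # [-0.7 -0.1  0.4]
print(upper)  # [-0.3  0.1  1.2]
```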
Usage Examples
Internal Usage After Boosting
```python
# process_terms is called internally after boosting completes:
# intercept, term_scores, standard_deviations = process_terms(
#     bagged_intercept, bagged_scores, bin_weights, bag_weights
# )
# The results become ebm.intercept_, ebm.term_scores_, and ebm.standard_deviations_
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

# Tiny synthetic dataset so the example is self-contained
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3))
y_train = (X_train[:, 0] > 0).astype(int)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# After fitting:
print(ebm.intercept_)         # Averaged intercept from process_terms
print(len(ebm.term_scores_))  # One score array per term
```