Implementation:Interpretml Interpret Explain Global
| Field | Value |
|---|---|
| Sources | InterpretML |
| Domains | Interpretability, Visualization |
| Last Updated | 2026-02-07 12:00 GMT |
Overview
explain_global is a concrete method for generating global model explanations, provided by the InterpretML EBM (Explainable Boosting Machine) classes.
Description
The explain_global method on ExplainableBoostingClassifier/Regressor generates a global explanation containing per-term shape function data, importance scores, and visualization metadata. It assembles continuous bar chart data for each term's shape function and a horizontal bar chart for overall feature importance. The returned EBMExplanation object can be passed to show() for interactive visualization.
Usage
Call this method on a fitted EBM to obtain a global explanation suitable for visualization, auditing, or programmatic analysis.
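The importance scores in the global explanation summarize each term's shape function. As a minimal conceptual sketch (not the InterpretML source), one common summary is the bin-weight-weighted mean of the absolute per-bin scores; the function name below is hypothetical:

```python
# Conceptual sketch: a global importance score for one additive term,
# computed as the bin-weight-weighted mean absolute shape-function score.
# This mirrors the idea behind EBM term importances; it is not the
# library implementation.
def mean_abs_importance(scores, weights):
    """scores: per-bin additive contributions; weights: per-bin sample counts."""
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(abs(s) * w for s, w in zip(scores, weights)) / total

# A term whose shape function is mostly near zero gets a small score.
print(round(mean_abs_importance([-0.5, 0.1, 0.4], [10, 80, 10]), 4))  # 0.17
```

Terms that move predictions strongly for many samples score high; terms that are large only in sparsely populated bins are discounted by the weights.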
Code Reference
Source Location
- Repository: interpretml/interpret
- File: python/interpret-core/interpret/glassbox/_ebm/_ebm.py
- Lines: 2047–2324
Signature
```python
def explain_global(self, name=None):
    """Provide global explanation for model.

    Args:
        name: User-defined explanation name.

    Returns:
        An explanation object, visualizing feature-value pairs
        as horizontal bar chart.
    """
```
Import
```python
from interpret.glassbox import ExplainableBoostingClassifier
# Then: ebm.explain_global(name="My Global Explanation")
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| self | fitted EBM model | Yes | The fitted ExplainableBoostingClassifier or ExplainableBoostingRegressor instance |
| name | str / None | No | User-defined explanation name |
Outputs
| Name | Type | Description |
|---|---|---|
| (return value) | EBMExplanation (extends FeatureValueExplanation) | Global feature importance data, continuous bar chart data per term, and a horizontal bar chart as the overall summary |
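The overall summary renders terms as a horizontal bar chart ordered by importance. A small illustrative sketch of that ordering step (a hypothetical helper, not part of the interpret API):

```python
# Illustrative sketch: order terms for an overall-importance bar chart,
# highest importance first, as the global summary view displays them.
def rank_terms(names, importances):
    pairs = sorted(zip(names, importances), key=lambda p: p[1], reverse=True)
    return [name for name, _ in pairs]

# Interaction terms (e.g. "Age & Income") are ranked alongside main effects.
print(rank_terms(["Age", "Income", "Age & Income"], [0.12, 0.30, 0.05]))
# ['Income', 'Age', 'Age & Income']
```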
Usage Examples
Generating and Visualizing a Global Explanation
```python
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# X_train, y_train: your training features and labels
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Generate global explanation
global_exp = ebm.explain_global(name="EBM Global")

# Visualize overall feature importance
show(global_exp)

# Visualize a specific feature's shape function
show(global_exp, key=0)  # first term
```
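Beyond the dashboard, the per-term shape data can also be analyzed programmatically. The dictionary layout below is an assumption for illustration only (check the data returned by your interpret version, e.g. via the explanation object's data accessor), as is the helper name:

```python
# Hypothetical illustration: per-term shape-function data, once extracted
# from a global explanation, can be inspected without the dashboard.
# The field names here are assumptions, not the interpret API.
term = {
    "names": [0.0, 25.0, 50.0, 75.0],   # feature values / bin edges
    "scores": [-0.8, -0.1, 0.3, 0.9],   # additive contribution per bin
}

def score_range(term):
    """Spread of a term's shape function: a quick effect-size check."""
    return max(term["scores"]) - min(term["scores"])

print(round(score_range(term), 6))  # 1.7
```

A large spread means the feature can swing the model's additive score substantially across its range; a near-zero spread flags a term the model barely uses.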