Implementation: InterpretML explain_local
| Field | Value |
|---|---|
| Sources | InterpretML |
| Domains | Interpretability, Visualization |
| Last Updated | 2026-02-07 12:00 GMT |
Overview
explain_local is a method provided by InterpretML's EBM classes for generating per-sample (local) explanations of individual predictions.
Description
The explain_local method decomposes predictions for specific samples into per-term contributions. For each sample, it evaluates each term's score tensor at the sample's bin indices and sums the resulting contributions. The returned explanation object visualizes each sample as a horizontal bar chart showing how each feature contributed to the prediction, and also includes the intercept, the predicted value, and optionally the actual value.
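The per-term lookup described above can be sketched in plain Python. This is an illustrative simplification, not InterpretML's internals: the cut points, score tables, and function names below are all hypothetical, but the mechanic (bin the feature value, look up that bin's learned score, sum scores plus the intercept) matches the description.

```python
import bisect

# Hypothetical fitted pieces: bin edges and per-bin scores for two features.
# (Illustrative values only; real EBMs learn these during fit.)
cuts = {"age": [30.0, 50.0], "income": [40000.0]}
term_scores = {"age": [-0.8, 0.1, 0.9], "income": [-0.5, 0.6]}
intercept = 0.25

def explain_one(sample):
    """Return per-term contributions and the total additive score."""
    contributions = {}
    for feature, value in sample.items():
        # Map the raw value to its bin, then look up that bin's score
        bin_index = bisect.bisect_right(cuts[feature], value)
        contributions[feature] = term_scores[feature][bin_index]
    total = intercept + sum(contributions.values())
    return contributions, total

contribs, score = explain_one({"age": 42.0, "income": 55000.0})
print(contribs)  # per-feature contributions: the bar heights of a local explanation
print(score)     # intercept + sum of contributions
```

Each value in `contribs` corresponds to one bar in the local explanation chart; their sum plus the intercept is the model's additive score for that sample.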
Usage
Call this method on a fitted EBM with specific samples to explain individual predictions.
Code Reference
Source Location
- Repository: interpretml/interpret
- File: python/interpret-core/interpret/glassbox/_ebm/_ebm.py
- Lines: 2326–2466
Signature
def explain_local(self, X, y=None, name=None, init_score=None):
"""Provide local explanations for provided samples.
Args:
X: NumPy array for X to explain.
y: NumPy vector for y to explain.
name: User-defined explanation name.
init_score: Optional. Either a model or per-sample initialization score.
Returns:
An explanation object, visualizing feature-value pairs
for each sample as horizontal bar charts.
"""
Import
from interpret.glassbox import ExplainableBoostingClassifier
# Then: ebm.explain_local(X_test, y_test)
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| X | np.ndarray | Yes | NumPy array of samples to explain |
| y | np.ndarray | No | NumPy vector of true labels for the samples |
| name | str | No | User-defined explanation name |
| init_score | model / array / None | No | Either a model or per-sample initialization score |
Outputs
| Name | Type | Description |
|---|---|---|
| explanation | EBMExplanation | Per-sample feature contributions, visualized as horizontal bar charts |
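The outputs table says contributions render as horizontal bar charts. As a purely illustrative sketch of that output shape (not InterpretML code), the (feature, contribution) pairs of one sample can be drawn as a text-mode bar chart:

```python
# Illustrative only: render (feature, contribution) pairs, sign-aware,
# the same shape of data a local explanation visualizes per sample.
def render_bars(pairs, width=20):
    """Scale each score to a bar of '#' characters proportional to |score|."""
    biggest = max(abs(s) for _, s in pairs)
    lines = []
    for name, score in pairs:
        bar = "#" * round(abs(score) / biggest * width)
        sign = "+" if score >= 0 else "-"
        lines.append(f"{name:>10} {sign} {bar}")
    return "\n".join(lines)

# Hypothetical contributions for a single sample
print(render_bars([("age", 0.9), ("income", -0.5), ("intercept", 0.25)]))
```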
Usage Examples
Explaining Specific Samples
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Synthetic data so the example is self-contained
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Explain the first five test samples
local_exp = ebm.explain_local(X_test[:5], y_test[:5], name="Local Explanations")

# Visualize the first sample's explanation
show(local_exp, key=0)