Principle: InterpretML Local Explanation Generation
| Metadata | |
|---|---|
| Sources | InterpretML |
| Domains | Interpretability, Visualization |
| Last Updated | 2026-02-07 12:00 GMT |
Overview
A technique that decomposes a single prediction into per-feature additive contributions to explain why a model made a specific decision.
Description
Local Explanation Generation breaks down an individual prediction into the contribution of each feature term. For EBMs, each sample's prediction is the intercept plus the sum of per-term scores. The local explanation displays these individual term contributions as a horizontal bar chart, showing how each feature pushed the prediction higher or lower. Since EBMs are additive, these contributions are exact (not approximations like SHAP or LIME for black-box models).
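The per-term decomposition described above can be sketched in a few lines. This is a minimal illustration with hypothetical lookup-table shape functions, not InterpretML's actual internals: an additive model's prediction is the intercept plus one score per term, and those per-term scores are exactly the bars in the local explanation chart.

```python
# Minimal sketch of local explanation generation for an additive model.
# The intercept and term score tables below are hypothetical stand-ins
# for the shape functions an EBM would learn from data.

intercept = 0.5
term_scores = {
    "age":    {30: -0.2, 45: 0.1},
    "income": {40_000: -0.3, 90_000: 0.4},
}

def local_explanation(sample):
    """Return (prediction, per-term contributions) for one sample."""
    contributions = {t: term_scores[t][sample[t]] for t in term_scores}
    prediction = intercept + sum(contributions.values())
    return prediction, contributions

pred, contribs = local_explanation({"age": 45, "income": 40_000})
# contribs maps each term to the signed amount it pushed the prediction
# up or down; plotting it as a horizontal bar chart gives the local view.
```

Because the model is additive, sorting `contribs` by magnitude directly yields the ranked bar chart shown in InterpretML's local explanation dashboard.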
Usage
Use this principle when you need to explain individual predictions for regulatory compliance, debugging, or user-facing explanations. Unlike global explanations that show overall patterns, local explanations answer "Why did this specific sample get this prediction?"
Theoretical Basis
For sample $x_i$, the prediction decomposes exactly:

$$\hat{y}_i = \beta_0 + \sum_t f_t(x_{i,S_t})$$

where $\beta_0$ is the intercept and each term $f_t$ operates on its feature subset $S_t$. Each term contribution $f_t(x_{i,S_t})$ is the local explanation for that term. This is an exact additive decomposition, not an approximation.
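The exactness claim can be checked numerically. In this sketch the two shape functions are hypothetical stand-ins for learned term scores; the point is that reassembling the intercept and per-term contributions reproduces the prediction with zero residual, in contrast to sampling-based approximations such as SHAP or LIME on black-box models.

```python
import numpy as np

# Hypothetical shape functions standing in for learned EBM term scores.
def f_age(a):      return 0.02 * (a - 40)           # term on feature "age"
def f_income(inc): return 0.5 * np.tanh(inc / 1e5)  # term on feature "income"

intercept = -0.1
x = {"age": 55.0, "income": 80_000.0}

# Per-term local contributions f_t(x_{i,S_t}).
contributions = {"age": f_age(x["age"]), "income": f_income(x["income"])}
prediction = intercept + sum(contributions.values())

# The decomposition is exact: intercept + contributions rebuilds the
# prediction with no approximation error (only float rounding).
residual = prediction - (intercept + contributions["age"] + contributions["income"])
```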