# Principle: InterpretML Global Explanation Generation
| Metadata | |
|---|---|
| Sources | InterpretML, InterpretML Docs |
| Domains | Interpretability, Visualization |
| Last Updated | 2026-02-07 12:00 GMT |
## Overview
A technique that summarizes the overall behavior of a machine learning model by computing feature-level importance scores and shape functions across the entire training distribution.
## Description
Global Explanation Generation produces a high-level summary of how each feature and interaction term contributes to model predictions across all training samples. For EBMs, this involves computing term importances (as either the mean absolute score or the min-max range of each shape function) and assembling the learned shape functions (score vs. feature value) for every term. The result is a set of visualizations: a shape-function plot per term (a line or step plot for continuous features, a bar chart for categorical ones), plus an overall horizontal bar chart ranking terms by importance. This lets practitioners see which features matter most and how they influence predictions.
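The assembly described above can be sketched in plain Python. This is an illustrative toy, not the InterpretML API: the data structures, term names, and `rank_terms` helper are all hypothetical.

```python
# Illustrative sketch (not the InterpretML API): assembling a global
# explanation from per-term shape functions and importance scores.

def rank_terms(term_importances):
    """Return (term_name, importance) pairs sorted for the overall bar chart."""
    return sorted(term_importances.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical model with two single-feature terms and one interaction term.
# Each shape function is a lookup table: bin label -> additive score.
shape_functions = {
    "Age":          {"<30": -0.4, "30-50": 0.1, ">50": 0.6},
    "Income":       {"low": -0.2, "high": 0.3},
    "Age x Income": {"young/low": -0.1, "old/high": 0.2},
}

# Importance here is computed as the min-max range of each shape function.
importances = {
    name: max(table.values()) - min(table.values())
    for name, table in shape_functions.items()
}

# Terms ordered as they would appear in the overall importance bar chart.
for name, score in rank_terms(importances):
    print(f"{name}: {score:.2f}")
```

Sorting by importance reproduces the ordering of the overall horizontal bar chart, while each lookup table holds the data behind one shape-function plot.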
## Usage
Use this principle after training an EBM to understand the global model behavior. It is the primary tool for model auditing, feature importance analysis, and communicating model behavior to stakeholders.
## Theoretical Basis
For an EBM with $T$ additive terms and $N$ training samples, the global importance of term $t$ is either the mean absolute score,

$$I_t^{\mathrm{avg\_weight}} = \frac{1}{N} \sum_{i=1}^{N} \left| f_t(x_i) \right|,$$

or the min-max range of the shape function,

$$I_t^{\mathrm{min\_max}} = \max_x f_t(x) - \min_x f_t(x).$$
The shape function $f_t(x)$ is directly available as the learned score lookup table, making EBMs inherently globally interpretable.
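The two importance definitions can be checked numerically on a hypothetical toy term. The cut points, bin scores, and helper names below are illustrative assumptions, not InterpretML code; the lookup mimics how a binned score table maps a feature value to its additive contribution.

```python
from bisect import bisect_right

def shape_lookup(cut_points, bin_scores, x):
    """Look up the additive score f_t(x) in the term's binned score table."""
    return bin_scores[bisect_right(cut_points, x)]

def avg_weight_importance(cut_points, bin_scores, samples):
    """Mean absolute score |f_t(x_i)| over the training samples."""
    contribs = [abs(shape_lookup(cut_points, bin_scores, x)) for x in samples]
    return sum(contribs) / len(contribs)

def min_max_importance(bin_scores):
    """Range max f_t - min f_t, independent of the data distribution."""
    return max(bin_scores) - min(bin_scores)

# Hypothetical term: two cut points -> three bins with learned scores.
cuts = [0.0, 1.0]
scores = [-0.5, 0.2, 0.9]
samples = [-1.0, 0.5, 0.5, 2.0]

print(avg_weight_importance(cuts, scores, samples))  # ~0.45
print(min_max_importance(scores))                    # ~1.4
```

Note that the two measures need not agree: avg_weight depends on where the training data actually falls, while min_max reflects only the extremes of the learned curve.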