Principle: InterpretML Interpret SHAP Kernel Explanation
Metadata
| Field | Value |
|---|---|
| Sources | Paper: SHAP, Paper: KernelSHAP |
| Domains | Interpretability, Feature_Attribution |
| Updated | 2026-02-07 |
Overview
A model-agnostic explanation method that computes Shapley values for each feature's contribution to individual predictions using a kernel-based approximation.
Description
SHAP (SHapley Additive exPlanations) Kernel Explanation uses game-theoretic Shapley values to attribute a model's prediction to individual features. KernelSHAP approximates Shapley values by sampling feature coalitions and fitting a weighted linear regression model. For each prediction, it produces per-feature attribution values that sum to the difference between the prediction and the expected model output over the reference dataset.
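The sampling-and-regression procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the `shap` library's implementation; the helper name `kernel_shap` and its parameters are hypothetical. Absent features are "masked" by substituting background samples, and the efficiency constraint (attributions sum to the prediction minus the expected output) is folded into the regression by eliminating one coefficient.

```python
import numpy as np
from math import comb

def kernel_shap(f, x, background, n_samples=2048, seed=0):
    """Minimal KernelSHAP sketch (hypothetical helper, not the shap library API).

    f          : model mapping an (n, M) array to (n,) predictions
    x          : instance to explain, shape (M,)
    background : reference dataset, shape (B, M)
    Returns (phi, base) where phi are per-feature attributions and
    base = E[f(X)] over the background set.
    """
    rng = np.random.default_rng(seed)
    M = x.shape[0]
    base = float(f(background).mean())   # expected model output
    fx = float(f(x[None, :])[0])

    # Sample binary coalition vectors z (1 = feature present),
    # dropping the all-absent / all-present rows (infinite kernel weight).
    Z = rng.integers(0, 2, size=(n_samples, M))
    keep = (Z.sum(axis=1) > 0) & (Z.sum(axis=1) < M)
    Z = Z[keep]

    # Evaluate f with absent features replaced by background samples,
    # averaging the predictions over the background set.
    preds = np.empty(len(Z))
    for j, z in enumerate(Z):
        masked = np.where(z == 1, x, background)   # broadcasts to (B, M)
        preds[j] = f(masked).mean()

    # Shapley kernel weight: pi(z) = (M - 1) / (C(M, |z|) * |z| * (M - |z|))
    s = Z.sum(axis=1)
    w = (M - 1) / (np.array([comb(M, int(k)) for k in s]) * s * (M - s))

    # Weighted least squares with the constraint sum(phi) = fx - base,
    # enforced by substituting out the last coefficient.
    y = preds - base - Z[:, -1] * (fx - base)
    A = Z[:, :-1] - Z[:, [-1]]
    phi_head = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    phi = np.append(phi_head, (fx - base) - phi_head.sum())
    return phi, base

# Demo on a linear model, where the exact attributions have a closed form:
# phi_i = coef_i * (x_i - mean(background_i)).
rng = np.random.default_rng(1)
background = rng.normal(size=(50, 3))
coef = np.array([3.0, 2.0, -1.0])
model = lambda X: X @ coef
x = np.array([1.0, -0.5, 2.0])
phi, base = kernel_shap(model, x, background)
```

For a linear model the regression fits the sampled coalitions exactly, so the recovered `phi` matches the closed form and satisfies the efficiency property `base + phi.sum() == f(x)`.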
Usage
Use SHAP Kernel Explanation when you need local (per-sample) feature attributions for any black-box model. Because it is model-agnostic, it requires only prediction access; the trade-off is cost, since each explanation needs many model evaluations (one per sampled coalition, each averaged over the reference set). This makes it expensive for large reference datasets or high-dimensional feature spaces, so prefer a model-specific explainer (e.g. TreeSHAP for tree ensembles) when one is available.
Theoretical Basis
The Shapley value for feature $i$ is defined as:

$$\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]$$

where $F$ is the full feature set and $f_S$ denotes the model evaluated with only the features in $S$ present. Exact computation requires all $2^{|F|}$ coalitions. KernelSHAP approximates this by sampling coalition vectors $z \in \{0,1\}^M$ and solving a weighted linear regression with the Shapley kernel

$$\pi(z) = \frac{M - 1}{\binom{M}{|z|}\,|z|\,(M - |z|)}$$

whose regression coefficients recover the Shapley values.
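As a concrete check on the kernel weighting, the snippet below (illustrative; the function name is ours) computes the weights for $M = 4$ features, showing that the kernel concentrates weight on the smallest and largest coalitions, which carry the most information about individual feature effects.

```python
from math import comb

def shapley_kernel_weight(M, s):
    # pi(s) = (M - 1) / (C(M, s) * s * (M - s)), defined for 0 < s < M
    return (M - 1) / (comb(M, s) * s * (M - s))

# For M = 4: coalitions of size 1 and 3 get weight 0.25, size 2 gets 0.125.
weights = {s: shapley_kernel_weight(4, s) for s in range(1, 4)}
```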