Implementation: InterpretML Interpret ShapKernel
Metadata
| Field | Value |
|---|---|
| Sources | Repo: InterpretML, Doc: SHAP |
| Domains | Interpretability, Feature_Attribution |
| Updated | 2026-02-07 |
| Type | Wrapper Doc (wraps shap.KernelExplainer) |
Overview
A wrapper for computing Kernel SHAP explanations for blackbox models, integrating the shap library into the InterpretML API.
Description
The ShapKernel class wraps shap.KernelExplainer to provide SHAP values through the InterpretML ExplainerMixin interface. It initializes a KernelExplainer with a model prediction function and reference data, then generates local explanations via explain_local().
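The wrapping pattern described above can be sketched in a few lines. This is a hypothetical minimal re-creation (class name `ShapKernelSketch` and its bodies are illustrative, not the real implementation): store the prediction function, reference data, and extra keyword arguments, then defer the shap import until an explanation is requested.

```python
import numpy as np

class ShapKernelSketch:
    """Hypothetical sketch of the wrapping pattern, not the real ShapKernel:
    hold the prediction function, reference data, and extra kwargs, and defer
    the shap import until explanation time."""
    available_explanations = ["local"]
    explainer_type = "blackbox"

    def __init__(self, model, data, feature_names=None, feature_types=None, **kwargs):
        self.model = model              # callable, e.g. clf.predict_proba
        self.data = data                # background/reference dataset
        self.feature_names = feature_names
        self.feature_types = feature_types
        self.kwargs = kwargs            # forwarded to shap.KernelExplainer

    def explain_local(self, X, y=None, name=None):
        import shap  # lazy import, as in the real class
        explainer = shap.KernelExplainer(self.model, self.data, **self.kwargs)
        return explainer.shap_values(np.asarray(X))

# Construction alone needs no shap install:
sk = ShapKernelSketch(lambda X: X.sum(axis=1), np.zeros((5, 3)), link="identity")
```

The lazy import keeps shap an optional dependency: importing `interpret.blackbox` stays cheap, and shap is only required when explanations are actually computed.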
Usage
Use this when you need SHAP-based local explanations for any model through the InterpretML show() visualization pipeline.
Code Reference
| Field | Value |
|---|---|
| Source | interpretml/interpret |
| File | python/interpret-core/interpret/blackbox/_shap.py |
| Lines | 12-68 |
| Import | from interpret.blackbox import ShapKernel |
| External | shap.KernelExplainer (lazy import) |
Signature:

```python
class ShapKernel(ExplainerMixin):
    available_explanations = ["local"]
    explainer_type = "blackbox"

    def __init__(self, model, data, feature_names=None, feature_types=None, **kwargs): ...

    def explain_local(self, X, y=None, name=None, **kwargs): ...
```
I/O Contract
Init inputs:
| Parameter | Type | Required | Notes |
|---|---|---|---|
| model | predict function | Yes | Callable returning predictions, e.g. predict_proba |
| data | reference data | Yes | Background dataset used by the kernel explainer |
| feature_names | list | No | |
| feature_types | list | No | |
| **kwargs | dict | No | Passed to shap.KernelExplainer |
explain_local inputs:
| Parameter | Type | Required | Notes |
|---|---|---|---|
| X | ndarray | Yes | |
| y | ndarray | No | |
| name | str | No | User-defined label for the explanation |
explain_local output: FeatureValueExplanation with SHAP values as horizontal bar charts.
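A defining property of the SHAP values behind that output is local accuracy: each instance's attributions plus the explainer's expected (base) value reconstruct the model output. A toy check with made-up numbers (not real ShapKernel output):

```python
import numpy as np

# Toy illustration of SHAP's local-accuracy property; base_value and
# shap_values are invented numbers, not output from ShapKernel.
base_value = 0.5
shap_values = np.array([[0.10, -0.20, 0.30],
                        [0.05,  0.15, -0.10]])

# Each row's attributions plus the base value equal the model output
# for that instance.
reconstructed = base_value + shap_values.sum(axis=1)
print(np.round(reconstructed, 2))  # [0.7 0.6]
```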
Usage Examples
```python
from interpret.blackbox import ShapKernel
from interpret import show
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data so the example is self-contained
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
shap_exp = ShapKernel(rf.predict_proba, X_train[:100])
local_explanation = shap_exp.explain_local(X_test[:5], y_test[:5], name="SHAP")
show(local_explanation, key=0)
```