Implementation: InterpretML PartialDependence
Metadata
| Field | Value |
|---|---|
| Sources | Repo: InterpretML |
| Domains | Interpretability, Feature_Analysis |
| Updated | 2026-02-07 |
| Type | API Doc |
Overview
Concrete tool from the InterpretML library for computing partial dependence plots (PDPs) for blackbox models.
Description
The PartialDependence class computes PDP data during __init__ (not during explain_global). It creates a grid of values for each feature, evaluates the model at each grid point (averaging over samples), and stores the results. The explain_global() method formats these precomputed results into an Explanation object for visualization.
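The grid-and-average scheme described above can be sketched in plain NumPy. This is an illustrative reimplementation of the idea, not the library's internal code: for one feature, build a grid over its observed range, pin every sample's value of that feature to each grid point, and average the model's predictions.

```python
import numpy as np

def partial_dependence_sketch(predict, data, feature_idx, num_points=10):
    """Illustrative PDP for a single feature (not the library's internals)."""
    col = data[:, feature_idx]
    grid = np.linspace(col.min(), col.max(), num_points)
    averaged = []
    for value in grid:
        pinned = data.copy()
        pinned[:, feature_idx] = value          # hold the feature fixed
        averaged.append(predict(pinned).mean())  # average over all samples
    return grid, np.array(averaged)

# Toy model whose output depends linearly on feature 0 only,
# so the PDP curve for feature 0 should itself be linear.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
grid, avg = partial_dependence_sketch(lambda d: 2.0 * d[:, 0], X, feature_idx=0)
```

Because the toy model ignores every feature except feature 0, averaging over samples leaves the curve exactly at 2·value for each grid point.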
Usage
Use this when you want to understand global feature effects for any model through the InterpretML show() pipeline.
Code Reference
| Field | Value |
|---|---|
| Source | interpretml/interpret |
| File | python/interpret-core/interpret/blackbox/_partialdependence.py |
| Lines | 72-191 |
| Import | from interpret.blackbox import PartialDependence |
Signature:
class PartialDependence(ExplainerMixin):
    available_explanations = ["global"]
    explainer_type = "blackbox"

    def __init__(self, model, data, feature_names=None, feature_types=None, num_points=10, std_coef=1.0):
    def explain_global(self, name=None):
I/O Contract
Init inputs:
| Parameter | Type | Required | Default |
|---|---|---|---|
| model | callable prediction function (e.g. predict or predict_proba) | Yes | |
| data | ndarray | Yes | |
| feature_names | list | No | |
| feature_types | list | No | |
| num_points | int | No | 10 |
| std_coef | float | No | 1.0 |
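A hedged note on std_coef: the table gives only its name and default, so the snippet below is an assumption about its role, namely that it scales a plus/minus standard-deviation band around the averaged curve at each grid point.

```python
import numpy as np

# Assumed role of std_coef (not taken from the library source): scale the
# +/- standard-deviation band around the averaged prediction at one grid point.
preds_at_grid_point = np.array([0.2, 0.4, 0.6, 0.8])  # model outputs with the feature pinned
mean = preds_at_grid_point.mean()
half_width = 1.0 * preds_at_grid_point.std()  # std_coef = 1.0 (the default)
band = (mean - half_width, mean + half_width)
```

With std_coef above 1.0 the band would widen; with 0.0 it would collapse onto the mean curve.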
explain_global inputs:
| Parameter | Type | Required | Notes |
|---|---|---|---|
| name | str | No | User-defined name for the explanation |
explain_global output: PDPExplanation with partial dependence curves per feature.
Usage Examples
from interpret.blackbox import PartialDependence
from interpret import show

# rf is a fitted model exposing predict_proba; X_train is its training data
pdp = PartialDependence(rf.predict_proba, X_train, feature_names=feature_names)
pdp_global = pdp.explain_global(name="Partial Dependence")

show(pdp_global)         # Overview dashboard for all features
show(pdp_global, key=0)  # First feature's PDP