
Implementation:Interpretml Interpret Preserve

From Leeroopedia


Field | Value
Sources | InterpretML
Domains | Visualization, Reporting
Last Updated | 2026-02-07 12:00 GMT

Overview

Preserve is a concrete tool, provided by the InterpretML library, for saving explanation visualizations to persistent files.

Description

The preserve function exports an explanation visualization to a file or renders it as a static inline element. It handles Plotly figures (saved as standalone HTML), DataFrames (HTML tables), and HTML strings. If no file_name is given, it renders the visualization inline in the current notebook.

Usage

Call preserve() when you need to save an explanation to an HTML file for sharing, reporting, or archiving.

Code Reference

Source Location

Repository
interpretml/interpret
File
python/interpret-core/interpret/visual/_interactive.py
Lines
194--222

Signature

def preserve(explanation, selector_key=None, file_name=None, **kwargs):
    """Preserves an explanation's visualization for Jupyter cell, or file.
    Args:
        explanation: An explanation.
        selector_key: If integer, treat as index. Otherwise, looks up value in first column.
        file_name: If assigned, saves visualization to this filename.
        **kwargs: Kwargs passed to the underlying render/export call.
    """

Import

from interpret import preserve

I/O Contract

Inputs

Name | Type | Required | Description
explanation | Explanation | Yes | An Explanation object to preserve
selector_key | int / str / None | No | If an integer, treated as a positional index; otherwise looked up in the selector's first column
file_name | str / None | No | If given, saves the visualization to this filename
**kwargs | keyword arguments | No | Passed through to the underlying render/export call

Outputs

Name | Type | Description
None | None | Side effect only: saves an HTML file or renders inline in the notebook

Usage Examples

Saving Explanations to Files

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from interpret.glassbox import ExplainableBoostingClassifier
from interpret import preserve

# Example data so the snippet runs end to end
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
global_exp = ebm.explain_global()

# Save the overall importance view to a standalone HTML file
preserve(global_exp, file_name="global_importance.html")

# Save the shape function of the first feature
preserve(global_exp, selector_key=0, file_name="feature_0_shape.html")

# Render inline in the notebook without saving
preserve(global_exp, selector_key=0)
