
Principle:Interpretml Interpret LIME Tabular Explanation

From Leeroopedia


Metadata

Field Value
Sources Paper: LIME
Domains Interpretability, Feature_Attribution
Updated 2026-02-07

Overview

A model-agnostic explanation method that approximates a black-box model locally with a sparse linear model to explain individual predictions.

Description

LIME (Local Interpretable Model-agnostic Explanations) explains individual predictions by perturbing the input around the sample of interest, observing the black-box model's output on the perturbed samples, and fitting a simple interpretable model (e.g., sparse linear regression) to the local neighborhood. The coefficients of this local model serve as feature importance scores.
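The perturb / observe / fit pipeline described above can be sketched from scratch with numpy. This is a minimal illustration, not the reference implementation: the black-box model, the Gaussian perturbation scale, and the exponential proximity kernel are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box: a logistic model in which only the first two
# features matter (features 2 and 3 are irrelevant).
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 2.0 * X[:, 1])))

def lime_explain(f, x0, n_samples=5000, kernel_width=0.75, ridge=1e-3):
    """Explain f(x0) by fitting a locally weighted ridge regression."""
    d = x0.shape[0]
    # 1. Perturb: sample a neighborhood around the instance of interest.
    Z = x0 + rng.normal(scale=0.5, size=(n_samples, d))
    y = f(Z)
    # 2. Proximity kernel: samples closer to x0 get larger weights.
    dist = np.linalg.norm(Z - x0, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 3. Weighted ridge regression; coefficients are the attributions.
    Zc = Z - np.average(Z, axis=0, weights=w)
    yc = y - np.average(y, weights=w)
    A = Zc.T @ (w[:, None] * Zc) + ridge * np.eye(d)
    b = Zc.T @ (w * yc)
    return np.linalg.solve(A, b)

x0 = np.zeros(4)
coefs = lime_explain(black_box, x0)
```

On this toy model the recovered coefficients are positive for feature 0, negative for feature 1, and near zero for the two irrelevant features, mirroring the local gradient of the black box at x0.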

Usage

Use LIME when you need fast, local feature attributions for any black-box model and want a simpler, cheaper alternative to SHAP; note that its explanations are sensitive to the sampling scheme and kernel-width choice.

Theoretical Basis

The LIME objective is:

ξ(x) = argmin_{g ∈ G} L(f, g, π_x) + Ω(g)

Where f is the black-box model, g is an interpretable model drawn from a class G, π_x is a proximity kernel that weights perturbed samples by their closeness to x, and Ω(g) penalizes the complexity of g (encouraging sparsity). For tabular data, LIME draws standardized perturbations z' ~ N(0, 1), maps them back to the original feature space by inverting the training-data standardization, evaluates f(z) on the resulting samples, and fits g as a weighted ridge regression.
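The standardize-then-invert sampling step can be shown concretely. This is a sketch under assumptions: the per-feature means and standard deviations below are illustrative stand-ins for statistics that would normally be estimated from the training data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed per-feature training statistics (mean, std) for illustration.
mu = np.array([10.0, -2.0, 5.0])
sigma = np.array([2.0, 0.5, 1.0])

# Sample z' ~ N(0, 1) in the standardized space, then invert the
# standardization to obtain perturbed instances z on the original scale.
z_prime = rng.standard_normal(size=(1000, 3))
z = mu + sigma * z_prime
```

The perturbed samples z inherit the training distribution's location and scale per feature, which is what makes the subsequent weighted ridge fit meaningful on the original units.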
