Principle: InterpretML EBM Model Merging
| Field | Value |
|---|---|
| Sources | Paper: InterpretML, Paper: Federated Learning |
| Domains | Federated_Learning, Model_Ensembling |
| Updated | 2026-02-07 |
Overview
A model combination procedure that merges multiple independently trained EBMs (Explainable Boosting Machines) into a single unified model by harmonizing their bin definitions and averaging their score functions.
Description
EBM Model Merging enables federated learning scenarios where EBMs are trained independently on different data partitions (possibly in different locations) and then combined into a single model. The process involves validating compatibility (same feature set, same link function), harmonizing bin definitions across models (merging cut points and category mappings), remapping score tensors to the unified bins, and averaging the harmonized scores. The result is a single EBM that captures the knowledge from all input models.
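The bin-harmonization and score-averaging steps above can be sketched for a single numeric feature of two models. This is a simplified illustration, not InterpretML's implementation: `merge_ebm_feature` is a hypothetical helper, and real EBMs also carry intercepts, missing/unseen-value bins, and interaction terms.

```python
import bisect

def merge_ebm_feature(cuts_a, scores_a, cuts_b, scores_b):
    """Merge one numeric feature's score function from two EBMs.

    cuts: sorted list of K cut points defining K + 1 bins.
    scores: list of K + 1 per-bin additive scores.
    """
    # 1. Harmonize bins: the unified cut points are the union of both sets.
    unified = sorted(set(cuts_a) | set(cuts_b))

    # Pick a representative value strictly inside each unified bin,
    # so bin lookups never land exactly on a boundary.
    reps = []
    for i in range(len(unified) + 1):
        if i == 0:
            reps.append(unified[0] - 1.0)
        elif i == len(unified):
            reps.append(unified[-1] + 1.0)
        else:
            reps.append((unified[i - 1] + unified[i]) / 2.0)

    # 2. Remap: each unified bin inherits the score of the old bin
    #    that contains it (piecewise-constant lookup).
    def remap(cuts, scores):
        return [scores[bisect.bisect_right(cuts, r)] for r in reps]

    a = remap(cuts_a, scores_a)
    b = remap(cuts_b, scores_b)

    # 3. Average the harmonized score tensors.
    return unified, [(x + y) / 2.0 for x, y in zip(a, b)]
```

InterpretML ships a `merge_ebms` utility that performs the full procedure on fitted models; the sketch above is only a stand-in for the per-feature core of that idea.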
Usage
Use this when multiple EBMs trained on different datasets need to be combined, such as in federated learning, privacy-preserving ML, or multi-site deployments.
Theoretical Basis
For M models with compatible structure:
- Unify bins: B_new = union of all bin boundaries across models
- Remap scores: look each model's piecewise-constant scores up on the unified bins (each unified bin inherits the score of the old bin that contains it)
- Average: F_merged(x) = (1/M) * Sum_{m=1..M} F_m(x) after remapping
This preserves interpretability while combining knowledge from multiple sources.
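A small numeric check of the averaging formula, under assumed toy models: because an EBM prediction is a sum of per-feature terms plus an intercept, averaging term by term gives the same result as averaging the full model outputs, so F_merged stays additive and interpretable. The models and term functions below are invented for illustration.

```python
# Two toy additive models over features (x1, x2); each term is a step function.
model_a = {"intercept": 0.5,
           "f1": lambda x: -1.0 if x < 0.0 else 1.0,
           "f2": lambda x: 0.0 if x < 3.0 else 2.0}
model_b = {"intercept": -0.5,
           "f1": lambda x: -3.0 if x < 1.0 else 3.0,
           "f2": lambda x: 1.0 if x < 3.0 else -1.0}

def predict(m, x1, x2):
    # Additive model: intercept plus one score per feature.
    return m["intercept"] + m["f1"](x1) + m["f2"](x2)

# Merged model: average the intercepts and the per-term score functions (M = 2).
merged = {"intercept": 0.5 * (model_a["intercept"] + model_b["intercept"]),
          "f1": lambda x: 0.5 * (model_a["f1"](x) + model_b["f1"](x)),
          "f2": lambda x: 0.5 * (model_a["f2"](x) + model_b["f2"](x))}

# Term-wise averaging equals averaging the predictions, since sums commute:
x1, x2 = 0.5, 4.0
assert predict(merged, x1, x2) == 0.5 * (predict(model_a, x1, x2)
                                         + predict(model_b, x1, x2))
```

The assertion holds at every input, which is why the merged model's shape functions can still be plotted and inspected feature by feature, just like the inputs' shape functions.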