Principle: Tencent ncnn Calibration Table Generation
| Knowledge Sources | |
|---|---|
| Domains | Quantization, Model_Optimization |
| Last Updated | 2026-02-09 00:00 GMT |
Overview
The process of computing optimal per-layer quantization scale factors by analyzing activation distributions over a calibration dataset.
Description
Calibration table generation is the core step of post-training quantization. It runs forward passes of the float32 model on calibration data, collecting activation statistics (histograms) for each quantizable layer. These statistics are then analyzed to find the optimal mapping from float32 to int8 that minimizes information loss.
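As a concrete illustration of the collection pass, the sketch below accumulates per-layer |activation| histograms in two passes over the calibration set. The `forward` callable, which maps one preprocessed image to a dict of per-layer float32 activations, is a hypothetical stand-in, not part of ncnn's API:

```python
import numpy as np

NUM_BINS = 2048  # histogram resolution per layer

def collect_statistics(images, forward):
    # `forward` maps one preprocessed image to {layer_name: float32 array};
    # it stands in for a real float32 forward pass of the model.
    absmax, hist = {}, {}
    # Pass 1: per-layer absolute maxima fix each histogram's range.
    for img in images:
        for name, act in forward(img).items():
            absmax[name] = max(absmax.get(name, 0.0), float(np.abs(act).max()))
    # Pass 2: accumulate |activation| histograms over [0, absmax].
    for img in images:
        for name, act in forward(img).items():
            counts, _ = np.histogram(np.abs(act), bins=NUM_BINS,
                                     range=(0.0, absmax[name]))
            hist[name] = hist.get(name, np.zeros(NUM_BINS, dtype=np.int64)) + counts
    return absmax, hist
```

Fixing each histogram's range in a first pass keeps the bin edges stable, so every calibration image contributes counts to the same bins.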
Three calibration methods are supported:
- KL-divergence (default): Minimizes the Kullback-Leibler divergence between the original float32 distribution and the quantized int8 distribution. This is the TensorRT-style approach and generally gives the best accuracy.
- ACIQ (Analytical Clipping for Integer Quantization): Analytically computes the optimal clipping threshold assuming Gaussian or Laplace activation distributions. Faster than KL but may be less accurate for non-standard distributions.
- EQ (Equalization): Cross-layer equalization that balances activation ranges across layers before quantization.
Usage
Use after preparing the calibration dataset and optimizing the float32 model with ncnnoptimize. The output calibration table is consumed by ncnn2int8 to produce the quantized model.
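A typical pipeline looks like the following sketch, based on ncnn's quantization documentation; the model names, preprocessing values (mean, norm, shape, pixel), and thread count are placeholders that must match your model:

```
./ncnnoptimize model.param model.bin model-opt.param model-opt.bin 0
./ncnn2table model-opt.param model-opt.bin imagelist.txt model.table mean=[104,117,123] norm=[0.017,0.017,0.017] shape=[224,224,3] pixel=BGR thread=8 method=kl
./ncnn2int8 model-opt.param model-opt.bin model-int8.param model-int8.bin model.table
```

Setting method=aciq or method=eq selects one of the alternative calibration methods described above.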
Theoretical Basis
KL-Divergence Calibration:
For each layer, find the threshold T that minimizes:

$$T^{*} = \arg\min_{T} D_{\mathrm{KL}}(P \,\|\, Q_T) = \arg\min_{T} \sum_{i} P(i) \log \frac{P(i)}{Q_T(i)}$$

where P is the original float32 activation histogram (with values above T clipped into the last bin) and Q_T is the quantized-then-dequantized distribution at threshold T.
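The search itself can be sketched as follows, in the style of TensorRT's entropy calibrator. It consumes the 2048-bin |activation| histogram from the collection pass; the function and constant names are illustrative, not ncnn internals:

```python
import numpy as np

NUM_BINS = 2048     # resolution of the collected |activation| histogram
TARGET_BINS = 128   # number of positive int8 quantization levels

def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    # KL(P || Q) over the bins where P has mass.
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], 1e-12))))

def find_kl_threshold(hist: np.ndarray, absmax: float) -> float:
    """Return the clipping threshold T that minimizes KL(P || Q_T)."""
    bin_width = absmax / NUM_BINS
    best_kl, best_i = np.inf, NUM_BINS
    for i in range(TARGET_BINS, NUM_BINS + 1):
        # P: the first i bins, with all clipped mass folded into the last kept bin.
        p = hist[:i].astype(np.float64)
        p[-1] += hist[i:].sum()
        if p.sum() == 0:
            continue
        # Q_T: merge the i bins down to 128 levels, then expand back,
        # spreading each merged count uniformly over its non-empty bins.
        q = np.zeros_like(p)
        for chunk in np.array_split(np.arange(i), TARGET_BINS):
            nonzero = chunk[p[chunk] > 0]
            if nonzero.size:
                q[nonzero] = p[chunk].sum() / nonzero.size
        kl = kl_divergence(p, q)
        if kl < best_kl:
            best_kl, best_i = kl, i
    return best_i * bin_width  # threshold in float32 activation units
```

The winning threshold T is then converted to the per-layer int8 activation scale, conventionally 127 / T, which is what the calibration table records.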
ACIQ Calibration:
Assuming activations follow a Gaussian distribution with mean μ and standard deviation σ (or a Laplace distribution with scale b), the optimal clipping threshold α minimizes the expected mean-squared error of M-bit uniform quantization over [−α, α]:

$$\mathrm{MSE}(\alpha) \approx 2\int_{\alpha}^{\infty} f(x)\,(x-\alpha)^{2}\,dx + \frac{\alpha^{2}}{3 \cdot 2^{2M}}$$

where f is the density of the centered activation distribution. The first term is the clipping distortion from probability mass outside [−α, α]; the second is the uniform quantization noise Δ²/12 with step size Δ = 2α/2^M. Because both terms scale with the distribution's spread, the minimizer α* is a fixed multiple of σ (Gaussian) or b (Laplace) for each bit width, which is what makes the method analytical rather than search-based.
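Under the Laplace assumption the clipping integral has the closed form 2b²e^(−α/b), reducing the problem to a one-dimensional minimization. The sketch below is a hypothetical illustration rather than ncnn's implementation: it estimates b from calibration activations and searches the MSE objective numerically in place of the paper's analytical solution:

```python
import numpy as np

def aciq_threshold_laplace(activations: np.ndarray, num_bits: int = 8) -> float:
    # Fit a Laplace scale b to the centered activations; b = E|x - mu|
    # is the maximum-likelihood estimate.
    x = activations.ravel()
    b = float(np.abs(x - x.mean()).mean())
    # MSE(alpha) = clipping distortion + uniform quantization noise;
    # for Laplace(0, b) the clipping term integrates to 2*b^2*exp(-alpha/b).
    alphas = np.linspace(0.1 * b, 20.0 * b, 2000)
    clip_noise = 2.0 * b**2 * np.exp(-alphas / b)
    quant_noise = alphas**2 / (3.0 * 2.0 ** (2 * num_bits))
    return float(alphas[np.argmin(clip_noise + quant_noise)])
```

For num_bits=8 this search settles near α* ≈ 9.9·b; smaller bit widths give proportionally tighter thresholds.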