Principle: LaurentMazare tch-rs Batch Normalization
| Knowledge Sources | |
|---|---|
| Domains | Deep_Learning, Optimization |
| Last Updated | 2026-02-08 14:00 GMT |
Overview
Normalization technique that standardizes layer inputs across the batch dimension, stabilizing training and enabling higher learning rates.
Description
Batch Normalization (BatchNorm) normalizes the activations of each channel across the mini-batch by subtracting the mean and dividing by the standard deviation. Learnable affine parameters (gamma and beta) allow the network to undo the normalization when needed. During training, running statistics (mean and variance) are maintained via exponential moving average for use during inference, where batch statistics may not be available or meaningful.
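As a concrete illustration of the computation described above, here is a minimal, dependency-free Rust sketch of the per-channel normalization. The function name `batch_norm_1d` and the `EPS` constant are illustrative only, not part of the tch-rs API:

```rust
// Illustrative sketch of the BatchNorm computation for one channel,
// in plain Rust with no dependencies. Not the tch-rs implementation.

const EPS: f64 = 1e-5; // small constant for numerical stability

/// Normalize one channel's activations across the batch, then apply
/// the learnable affine transform y = gamma * x_hat + beta.
fn batch_norm_1d(xs: &[f64], gamma: f64, beta: f64) -> Vec<f64> {
    let n = xs.len() as f64;
    let mean = xs.iter().sum::<f64>() / n;
    // Biased (population) variance, as used for normalization during training.
    let var = xs.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
    let inv_std = 1.0 / (var + EPS).sqrt();
    xs.iter()
        .map(|x| gamma * (x - mean) * inv_std + beta)
        .collect()
}

fn main() {
    let xs = [1.0, 2.0, 3.0, 4.0];
    // With gamma = 1 and beta = 0 the output has approximately
    // zero mean and unit variance.
    let ys = batch_norm_1d(&xs, 1.0, 0.0);
    let mean: f64 = ys.iter().sum::<f64>() / ys.len() as f64;
    println!("normalized = {:?}, mean = {:.6}", ys, mean);
}
```

With gamma and beta learned, the network can scale and shift the normalized activations, recovering the identity mapping if normalization turns out to be harmful for a given layer.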
Usage
Use batch normalization after convolutional or linear layers and before activation functions to stabilize training, reduce internal covariate shift, and allow higher learning rates. In tch-rs, invoke BatchNorm through the ModuleT trait (whose forward method takes a train flag) rather than plain Module, since BatchNorm behaves differently during training (batch statistics) and evaluation (running statistics).
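A sketch of what this usage might look like in tch-rs, assuming the crate's `nn::seq_t`, `nn::conv2d`, `nn::batch_norm2d`, and `ModuleT::forward_t` building blocks; the layer sizes and variable-store paths are illustrative:

```rust
// Illustrative tch-rs sketch: Conv -> BatchNorm -> ReLU, built with
// nn::seq_t so the whole stack implements ModuleT and threads the
// `train` flag through to BatchNorm. Sizes and paths are arbitrary.
use tch::nn::ModuleT;
use tch::{nn, Device, Tensor};

fn main() {
    let vs = nn::VarStore::new(Device::Cpu);
    let root = vs.root();
    let net = nn::seq_t()
        .add(nn::conv2d(&root / "conv", 3, 16, 3, Default::default()))
        .add(nn::batch_norm2d(&root / "bn", 16, Default::default()))
        .add_fn(|xs| xs.relu());

    let xs = Tensor::randn(&[8, 3, 32, 32], tch::kind::FLOAT_CPU);
    let train_out = net.forward_t(&xs, true); // train: batch statistics
    let eval_out = net.forward_t(&xs, false); // eval: running statistics
    println!("{:?} {:?}", train_out.size(), eval_out.size());
}
```

Placing the BatchNorm layer between the convolution and the ReLU matches the ordering recommended above.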
Theoretical Basis
The normalization applied to each activation $x_i$ within a mini-batch $B$:

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta$$

Where:
- $\mu_B$ and $\sigma_B^2$ are the batch mean and variance
- $\gamma$ and $\beta$ are learnable scale and shift parameters
- $\epsilon$ is a small constant for numerical stability (default 1e-5)
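The running statistics mentioned in the Description are maintained with an exponential moving average. A plain-Rust sketch, assuming the PyTorch convention `running = (1 - momentum) * running + momentum * batch_stat` with the default momentum of 0.1; the function name is illustrative:

```rust
// Illustrative EMA update for BatchNorm running statistics, following
// the PyTorch convention (default momentum 0.1). Plain Rust, no deps.

const MOMENTUM: f64 = 0.1;

/// One exponential-moving-average step toward the latest batch statistic.
fn ema_update(running: f64, batch_stat: f64) -> f64 {
    (1.0 - MOMENTUM) * running + MOMENTUM * batch_stat
}

fn main() {
    // Running mean starts at 0.0; repeatedly feeding batches whose mean
    // is 2.0 drives the running mean toward 2.0.
    let mut running_mean = 0.0;
    for _ in 0..50 {
        running_mean = ema_update(running_mean, 2.0);
    }
    println!("running_mean = {:.4}", running_mean); // converges toward 2.0
}
```

At inference time these accumulated running statistics replace the batch statistics in the normalization formula above, so a single example can be normalized without a meaningful batch.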