Principle: OpenGVLab InternVL Image Transform Pipeline
| Knowledge Sources | |
|---|---|
| Domains | Computer_Vision, Preprocessing, Data_Augmentation |
| Last Updated | 2026-02-07 00:00 GMT |
Overview
A configurable image normalization and augmentation pipeline that converts raw images into tensors suitable for vision transformer input.
Description
Image transform pipelines standardize raw images into the format expected by vision encoders. For vision-language models, this involves resizing to the expected resolution, converting to tensors, and normalizing pixel values using dataset-specific statistics (ImageNet, CLIP, or SigLIP normalization constants).
The pipeline differs between training and inference:
- Training: May include data augmentation (e.g., a random resized crop) followed by normalization
- Inference: Uses deterministic resize and center crop followed by normalization
The choice of normalization statistics must match the pretrained vision encoder (e.g., ImageNet mean/std for models pretrained on ImageNet, CLIP statistics for CLIP-based encoders).
Usage
Use this principle when preparing image inputs for any vision transformer model. The specific normalization type should match the vision encoder's pretraining distribution.
Theoretical Basis
Image normalization first scales pixel values from [0, 255] to [0, 1], then shifts and rescales them to a distribution centered around zero:

x'_c = (x_c / 255 − μ_c) / σ_c

where μ_c and σ_c are the channel-wise mean and standard deviation from the pretraining dataset:
| Normalization Type | Mean (R, G, B) | Std (R, G, B) |
|---|---|---|
| ImageNet | (0.485, 0.456, 0.406) | (0.229, 0.224, 0.225) |
| CLIP | (0.4815, 0.4578, 0.4082) | (0.2686, 0.2613, 0.2758) |
| SigLIP | (0.5, 0.5, 0.5) | (0.5, 0.5, 0.5) |
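The formula and the table above can be combined into a minimal NumPy sketch; the `STATS` dictionary and `normalize` helper are illustrative names, not part of any library API:

```python
import numpy as np

# Channel-wise (mean, std) statistics keyed by normalization type.
STATS = {
    "imagenet": ((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    "clip":     ((0.4815, 0.4578, 0.4082), (0.2686, 0.2613, 0.2758)),
    "siglip":   ((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
}

def normalize(pixels_uint8, norm_type="imagenet"):
    """Normalize an HWC uint8 image: scale to [0, 1], then (x - mean) / std."""
    mean, std = (np.asarray(v, dtype=np.float32) for v in STATS[norm_type])
    x = pixels_uint8.astype(np.float32) / 255.0  # [0, 255] -> [0, 1]
    return (x - mean) / std                      # broadcasts over the channel axis
```

Note that SigLIP's (0.5, 0.5, 0.5) statistics simply map [0, 1] to [−1, 1], while the ImageNet and CLIP statistics are empirical channel means and standard deviations.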