Principle: OpenGVLab InternVL Component Model Assembly
| Knowledge Sources | Details |
|---|---|
| Domains | Vision_Language, Model_Architecture, Pretraining |
| Last Updated | 2026-02-07 00:00 GMT |
Overview
A model initialization pattern that loads separate pretrained vision encoder and language model components and assembles them into a composite vision-language model with a newly initialized projector.
Description
During the first stage of multi-stage pretraining, there is no composite checkpoint to load. Instead, the vision encoder (InternViT) and the language model (InternLM2, Qwen2, etc.) are loaded separately from their own pretrained checkpoints and assembled into a single InternVLChatModel with a randomly initialized MLP projector.
This Path B loading is distinct from Path A (loading a complete InternVL checkpoint) and is specifically used for:
- Stage 1 pretraining: Training the MLP projector to bridge vision and language
- Creating new model variants with different LLM backends
- Experimenting with different vision encoder sizes
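The dispatch between the two paths can be sketched as a small helper. This is a minimal sketch, not the InternVL codebase's actual logic; the `model_type == "internvl_chat"` check is an assumed heuristic for recognizing a composite checkpoint's `config.json`.

```python
import json
import os

def choose_loading_path(checkpoint_dir: str) -> str:
    """Decide between Path A (complete InternVL checkpoint) and
    Path B (component assembly).

    Heuristic sketch: a composite checkpoint ships a config.json
    declaring the composite model type (assumed "internvl_chat").
    """
    cfg_path = os.path.join(checkpoint_dir, "config.json")
    if os.path.exists(cfg_path):
        with open(cfg_path) as f:
            cfg = json.load(f)
        if cfg.get("model_type") == "internvl_chat":
            return "A"  # load the complete composite checkpoint
    return "B"  # assemble from separate vision + LLM checkpoints
```

A directory holding only component checkpoints (no composite `config.json`) falls through to Path B, which is the Stage 1 case described above.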
Usage
Use component assembly when starting multi-stage pretraining from scratch, specifically for Stage 1 MLP warmup where the projector needs to be trained from randomly initialized weights.
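For Stage 1 MLP warmup, only the projector receives gradient updates. A minimal PyTorch sketch of the freezing pattern, using a stand-in module (the attribute names `vision_model`, `language_model`, and `mlp1` follow the naming used in this document; `TinyVLM` is a hypothetical stand-in, not the real model):

```python
import torch.nn as nn

class TinyVLM(nn.Module):
    """Stand-in for the assembled composite model."""
    def __init__(self):
        super().__init__()
        self.vision_model = nn.Linear(8, 8)    # stand-in vision encoder
        self.language_model = nn.Linear(8, 8)  # stand-in LLM
        self.mlp1 = nn.Linear(8, 8)            # projector trained in Stage 1

def freeze_for_stage1(model: nn.Module) -> None:
    # Freeze every parameter, then re-enable gradients only for the projector.
    for p in model.parameters():
        p.requires_grad_(False)
    for p in model.mlp1.parameters():
        p.requires_grad_(True)

model = TinyVLM()
freeze_for_stage1(model)
```

After this call, an optimizer built from `filter(lambda p: p.requires_grad, model.parameters())` updates only the projector weights.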
Theoretical Basis
# Pseudo-code: Component assembly (Path B)
# Import paths assume the InternVL training repo; vision_path and
# llm_path are placeholders for the component checkpoint directories.
from transformers import AutoModelForCausalLM
from internvl.model.internvl_chat import (
    InternVisionModel, InternVLChatConfig, InternVLChatModel,
)

# 1. Load the pretrained vision encoder (InternViT)
vision_model = InternVisionModel.from_pretrained(vision_path)

# 2. Load the pretrained language model (InternLM2, Qwen2, etc.)
language_model = AutoModelForCausalLM.from_pretrained(llm_path)

# 3. Build a composite config from the two component configs
config = InternVLChatConfig(
    vision_config=vision_model.config,
    llm_config=language_model.config,
)

# 4. Assemble — the MLP projector is initialized randomly
model = InternVLChatModel(
    config, vision_model=vision_model, language_model=language_model
)
# model.mlp1 is randomly initialized and must be trained (Stage 1)
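The randomly initialized projector itself can be sketched as a small MLP that maps vision-token features into the LLM's embedding space. This is a simplified illustration; the real InternVL projector also accounts for pixel-shuffle token downsampling, and the sizes below (`vit_hidden`, `llm_hidden`, 256 tokens) are illustrative, not taken from any specific checkpoint.

```python
import torch
import torch.nn as nn

vit_hidden, llm_hidden = 1024, 2048  # illustrative hidden sizes

# Simplified stand-in for model.mlp1: normalize vision features,
# then project them into the LLM embedding space.
mlp1 = nn.Sequential(
    nn.LayerNorm(vit_hidden),
    nn.Linear(vit_hidden, llm_hidden),
    nn.GELU(),
    nn.Linear(llm_hidden, llm_hidden),
)

vision_tokens = torch.randn(1, 256, vit_hidden)  # [batch, tokens, dim]
projected = mlp1(vision_tokens)                  # [1, 256, llm_hidden]
```

Because these weights start random, their outputs are initially meaningless to the frozen LLM; Stage 1 exists precisely to train this bridge before any other component is unfrozen.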