Implementation:OpenGVLab InternVL Merge LoRA
| Knowledge Sources | |
|---|---|
| Domains | Parameter_Efficient_Finetuning, Model_Deployment |
| Last Updated | 2026-02-07 00:00 GMT |
Overview
A concrete tool from the InternVL repository for merging LoRA adapter weights into the base model.
Description
The merge_lora.py script loads a LoRA-finetuned InternVL checkpoint, calls merge_and_unload() on each adapted submodule (vision_model and/or language_model), resets the config flags, and saves the merged model. The output is a standard InternVL checkpoint that can be loaded without PEFT.
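The merge step folds the low-rank adapter update back into the dense base weight, so inference needs only a single matmul per layer. A minimal NumPy sketch of the arithmetic that PEFT's merge_and_unload() performs (shapes, rank, and alpha are illustrative, not InternVL's actual values):

```python
import numpy as np

# Illustrative dimensions; real InternVL layers are far larger.
d_out, d_in, r, alpha = 8, 8, 2, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # LoRA down-projection
B = rng.standard_normal((d_out, r))      # LoRA up-projection
scale = alpha / r                        # LoRA scaling factor

# During LoRA inference the layer computes x @ (W + scale * B @ A).T.
# Merging folds the low-rank update into the base weight once:
W_merged = W + scale * (B @ A)

x = rng.standard_normal((1, d_in))
y_adapter = x @ (W + scale * (B @ A)).T  # base + adapter path
y_merged = x @ W_merged.T                # single dense matmul after merging
assert np.allclose(y_adapter, y_merged)
```

After merging, the PEFT wrapper is no longer needed, which is why the script also unwraps `.model` and clears the config flags.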
Usage
Run this script after completing LoRA fine-tuning to produce a deployable model checkpoint. The output can be loaded directly with InternVLChatModel.from_pretrained().
Code Reference
Source Location
- Repository: InternVL
- File: internvl_chat/tools/merge_lora.py
- Lines: L1-31
Signature
```python
# merge_lora.py — complete script
import argparse

import torch
from internvl.model.internvl_chat import InternVLChatModel
from transformers import AutoTokenizer

parser = argparse.ArgumentParser()
parser.add_argument('input_path', type=str, help='Path to the input model')
parser.add_argument('output_path', type=str, help='Path to the output model')
args = parser.parse_args()

# Load the LoRA-finetuned checkpoint in bfloat16.
model = InternVLChatModel.from_pretrained(
    args.input_path, low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).eval()
tokenizer = AutoTokenizer.from_pretrained(args.input_path, trust_remote_code=True)

# Fold LoRA weights into the vision backbone, then unwrap the PEFT wrapper.
if model.config.use_backbone_lora:
    model.vision_model.merge_and_unload()
    model.vision_model = model.vision_model.model
    model.config.use_backbone_lora = 0

# Same for the language model.
if model.config.use_llm_lora:
    model.language_model.merge_and_unload()
    model.language_model = model.language_model.model
    model.config.use_llm_lora = 0

model.save_pretrained(args.output_path)
tokenizer.save_pretrained(args.output_path)
```
Invocation

```shell
python internvl_chat/tools/merge_lora.py <input_path> <output_path>
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| input_path | str | Yes | Path to LoRA-finetuned checkpoint directory |
| output_path | str | Yes | Path for saving merged model |
Outputs
| Name | Type | Description |
|---|---|---|
| Merged model | Directory | Standard InternVL checkpoint with LoRA weights folded into base weights |
| config.json | File | Config with use_backbone_lora=0, use_llm_lora=0 |
| tokenizer files | Files | Copied tokenizer files |
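A successful merge leaves both LoRA flags at 0 in the saved config.json, so the output directory can be sanity-checked without loading the full model. A minimal sketch, using a simulated checkpoint directory rather than a real merge (paths and the two-key config are illustrative):

```python
import json
import tempfile
from pathlib import Path

# Simulate the config a merged checkpoint should contain.
merged_path = Path(tempfile.mkdtemp())
(merged_path / "config.json").write_text(
    json.dumps({"use_backbone_lora": 0, "use_llm_lora": 0}))

# Check the flags the merge script is expected to reset.
config = json.loads((merged_path / "config.json").read_text())
assert config["use_backbone_lora"] == 0
assert config["use_llm_lora"] == 0
print("merged config OK")
```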
Usage Examples
Merge LoRA Checkpoint
```shell
# After LoRA fine-tuning completes:
python internvl_chat/tools/merge_lora.py \
    ./output/lora_checkpoint \
    ./output/merged_model

# Verify: load the merged model without PEFT
python -c "
from internvl.model.internvl_chat import InternVLChatModel
model = InternVLChatModel.from_pretrained('./output/merged_model')
print('LoRA flags:', model.config.use_backbone_lora, model.config.use_llm_lora)
# Should print: LoRA flags: 0 0
"
```