Implementation: FlagOpen FlagEmbedding VideoChat2 Choice Bench
| Knowledge Sources | |
|---|---|
| Domains | Video Understanding, Multiple Choice Evaluation, MLVU Benchmark |
| Last Updated | 2026-02-09 00:00 GMT |
Overview
An evaluation script for the VideoChat2 model on the multiple-choice video understanding tasks of the MLVU benchmark.
Description
This implementation provides a complete evaluation pipeline for the VideoChat2 model on the choice-based tasks of the MLVU (Multi-task Long Video Understanding) benchmark. It loads a pre-trained VideoChat2 model with LoRA adaptations, processes video data with a specific frame sampling strategy, and evaluates the model's performance across seven task types: count, ego, needle, order, plotQA, anomaly recognition, and topic reasoning. The script handles video loading, frame extraction, and positional embeddings, and generates predictions for multiple-choice questions using the VideoChat2 architecture with a LLaMA backbone.
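The frame sampling strategy mentioned above can be sketched as follows. This is a minimal illustration of the segment-midpoint sampling commonly used by VideoChat2/MVBench-style pipelines, not the script's verbatim code; the helper name `get_frame_indices` is an assumption.

```python
def get_frame_indices(num_frames, num_segments=16):
    # Split the video into `num_segments` equal temporal segments and
    # take (approximately) the midpoint frame of each segment.
    seg_size = float(num_frames - 1) / num_segments
    return [int(seg_size / 2 + round(seg_size * i)) for i in range(num_segments)]

# e.g. sample 8 frame indices from a 100-frame video
indices = get_frame_indices(100, num_segments=8)
```

Sampling midpoints rather than segment boundaries avoids over-weighting the very first and last frames of long videos.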
Usage
Use this script to evaluate VideoChat2 models on MLVU benchmark tasks that require selecting the best answer from multiple choices. It is designed for assessing video understanding capabilities on long videos with specific question-answering formats.
Code Reference
Source Location
- Repository: FlagOpen_FlagEmbedding
- File: research/MLVU/evaluation/models/videochat2/choice_bench.py
- Lines: 1-520
Key Components
class MLVU(Dataset):
    def __init__(self, data_dir, data_list, num_segments=8, resolution=224):
        # Dataset initialization
    def __getitem__(self, idx):
        # Returns video, question, answer, and task_type

def infer_mvbench(data_sample, system="", question_prompt='',
                  answer_prompt=None, return_prompt='',
                  system_q=False, print_res=False, system_llm=False):
    # Inference function for video-question pairs
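The multiple-choice prompt consumed by `infer_mvbench` is assembled from the question and its lettered candidate answers. A hedged sketch of that assembly step follows; the helper name `build_prompt` and the exact spacing are assumptions rather than the script's actual code.

```python
def build_prompt(question, candidates,
                 question_prompt="\nOnly give the best option."):
    # Lettered options in the "(A) ..." style used by MVBench-like benchmarks
    letters = "ABCDEFGH"
    options = "\n".join(f"({letters[i]}) {c}" for i, c in enumerate(candidates))
    return f"Question: {question}\nOptions:\n{options}{question_prompt}"
```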
Import
# Model and utilities
from models import VideoChat2_it_vicuna
from utils.config import Config
from utils.easydict import EasyDict
# Video processing
from decord import VideoReader, cpu
from dataset.video_transforms import (
GroupNormalize, GroupScale, GroupCenterCrop,
Stack, ToTorchFormatTensor
)
# Deep learning
import torch
from transformers import StoppingCriteria, StoppingCriteriaList
from peft import get_peft_model, LoraConfig, TaskType
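The `StoppingCriteria` import is typically used to halt generation once the model emits a stop sequence (such as the `###` delimiter in Vicuna-style prompts). The core check can be sketched in pure Python; the token ids below are made up for illustration.

```python
def ends_with_stop(generated_ids, stop_sequences):
    # True once the tail of the generated token ids matches any stop sequence
    return any(
        len(seq) <= len(generated_ids) and generated_ids[-len(seq):] == seq
        for seq in stop_sequences
    )

# illustrative token ids only
stop_sequences = [[835], [2277, 29937]]
```

A `StoppingCriteria` subclass would apply this check to the decoded tail of `input_ids` on every generation step.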
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| data_dir | str | Yes | Directory containing JSON annotation files |
| data_list | dict | Yes | Mapping of task types to (json_file, video_dir, data_type) tuples |
| num_segments | int | No | Number of frames to sample from each video (default: 8 in the class signature; the examples below pass 16) |
| resolution | int | No | Target resolution for video frames (default: 224) |
| model checkpoint | str | Yes | Path to videochat2_7b_stage3.pth checkpoint file |
Outputs
| Name | Type | Description |
|---|---|---|
| test_all_choice.json | JSON file | Contains accuracy dictionary and result list with predictions |
| bench_all.json | JSON file | Final accuracy results per task type and overall average |
| Console output | text | Part accuracy and progress information |
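The per-task accuracies written to `bench_all.json` can be aggregated roughly as follows. This is a sketch: it assumes the result list is a sequence of dicts with `task_type` and a boolean `correct` field, which may differ from the script's exact record layout.

```python
from collections import defaultdict

def summarize(results):
    # Accuracy (%) per task type, plus the overall average across tasks
    correct, total = defaultdict(int), defaultdict(int)
    for r in results:
        total[r["task_type"]] += 1
        correct[r["task_type"]] += int(r["correct"])
    acc = {t: 100.0 * correct[t] / total[t] for t in total}
    acc["Avg"] = sum(acc.values()) / len(acc)
    return acc
```

Note the average here is over task types, not over individual samples, so tasks with fewer samples still weigh equally.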
Usage Examples
# Data configuration
data_list = {
"count": ("4_count.json", "/MLVU_all/video/count", "video"),
"ego": ("3_ego.json", "/MLVU_all/video/ego", "video"),
"needle": ("2_needle.json", "/MLVU_all/video/needle", "video"),
"order": ("5_order.json", "/MLVU_all/video/order", "video"),
"plotQA": ("1_plotQA.json", "/MLVU_all/video/plotQA", "video"),
"anomaly_reco": ("6_anomaly_reco.json", "/MLVU_all/video/anomaly_reco", "video"),
"topic_reasoning": ("7_topic_reasoning.json", "/MLVU_all/video/topic_reasoning", "video")
}
# Initialize dataset
dataset = MLVU(data_dir="/MLVU_all/json", data_list=data_list,
num_segments=16, resolution=224)
# Run inference on a sample
for example in dataset:
pred = infer_mvbench(
example,
system="Carefully watch this video and pay attention to every detail. Based on your observations, select the best option that accurately addresses the question.\n",
question_prompt="\nOnly give the best option.",
answer_prompt="Best option:(",
return_prompt='('
)
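With `answer_prompt="Best option:("` and `return_prompt='('`, the model's reply typically reads like `(B) ...`. Extracting the option letter for scoring can be sketched as below; `extract_option_letter` is a hypothetical helper, not the script's verbatim post-processing.

```python
def extract_option_letter(pred):
    # Pull the letter out of replies like "(B) The man ..." or a bare "B"
    if "(" in pred:
        return pred.split("(")[1].split(")")[0].strip().upper()
    return pred.strip()[:1].upper()
```

Scoring then reduces to comparing this letter against the ground-truth option letter per sample.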