
Implementation:OpenCompass VLMEvalKit Infer Data API

From Leeroopedia
Field  | Value
source | VLMEvalKit
domain | Vision, Evaluation, API_Integration

Overview

A concrete VLMEvalKit utility for running parallel API-based VLM inference with progress tracking.

Description

infer_data_api() in vlmeval/inference.py handles inference for API-based models. It builds prompts for all samples, loads any existing partial results, filters out already-completed samples, then dispatches the remaining work to track_progress_rich() for parallel execution via ThreadPoolExecutor. Results are saved incrementally to a pickle file. It also supports a result-reuse optimization between MMBench v1.0 and v1.1.
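The resume-and-dispatch flow described above can be sketched as follows. This is a simplified illustration, not VLMEvalKit code: the function name infer_with_resume and the run_one callback are hypothetical, and the real implementation routes work through track_progress_rich() rather than calling ThreadPoolExecutor directly.

```python
import os
import pickle
from concurrent.futures import ThreadPoolExecutor

def infer_with_resume(items, run_one, out_file, nproc=4):
    """Run run_one(item) over items in parallel, skipping indices that
    already have results in the pickle file and saving results back."""
    results = {}
    if os.path.exists(out_file):
        with open(out_file, 'rb') as f:
            results = pickle.load(f)  # partial results from a prior run
    # Keep only samples that have no saved prediction yet
    todo = {idx: item for idx, item in items.items() if idx not in results}
    with ThreadPoolExecutor(max_workers=nproc) as pool:
        # pool.map preserves input order, so indices and predictions line up
        for idx, pred in zip(todo, pool.map(run_one, todo.values())):
            results[idx] = pred
    with open(out_file, 'wb') as f:
        pickle.dump(results, f)  # persist so an interrupted run can resume
    return results
```

Because completed indices are filtered before dispatch, re-running after an interruption only pays API cost for the samples that are still missing.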

Usage

Called internally by infer_data() when model.is_api is True. Can also be called directly for API-only evaluation.
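The dispatch decision can be pictured as below. This is an illustrative sketch only; select_inference_path is a hypothetical helper, and the real infer_data() does additional bookkeeping around this check.

```python
from types import SimpleNamespace

def select_inference_path(model) -> str:
    # Mirrors the documented dispatch: models with is_api == True are
    # routed to infer_data_api(); all others run a local inference loop.
    return 'infer_data_api' if getattr(model, 'is_api', False) else 'local_loop'

api_model = SimpleNamespace(is_api=True)
local_model = SimpleNamespace(is_api=False)
```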

Code Reference

  • Source: vlmeval/inference.py, Lines: L21-80
  • Signature:
def infer_data_api(
    model,                          # BaseAPI instance or model name string
    work_dir: str,                  # Output directory
    model_name: str,                # Model name for file naming
    dataset,                        # Dataset instance
    index_set: Optional[set] = None,  # Subset of indices to process
    api_nproc: int = 4,             # Number of parallel threads
    ignore_failed: bool = False     # Whether to retry failed predictions
) -> dict:
    """
    Returns dict mapping sample indices to prediction strings.
    """
  • Import: from vlmeval.inference import infer_data_api

I/O Contract

Inputs

Parameter     | Type             | Description
model         | BaseAPI or str   | API model instance or model name string
work_dir      | str              | Output directory
model_name    | str              | Model name for file naming
dataset       | ImageBaseDataset | Dataset instance
index_set     | Optional[set]    | Subset of indices to process
api_nproc     | int              | Number of parallel threads (default 4)
ignore_failed | bool             | Whether to retry failed predictions (default False)

Outputs

Output  | Type                           | Description
Returns | Dict[index, prediction_string] | Dictionary mapping sample indices to prediction strings
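The returned mapping can be joined back onto the dataset's index column, for example with pandas. This is a sketch under assumptions: attach_predictions is a hypothetical helper, and the 'index' column name matches the common VLMEvalKit dataset convention but is not guaranteed here.

```python
import pandas as pd

def attach_predictions(data: pd.DataFrame, results: dict) -> pd.DataFrame:
    # Map each sample's index to its prediction string; samples with
    # no prediction get NaN, which makes failed samples easy to spot.
    out = data.copy()
    out['prediction'] = out['index'].map(results)
    return out

df = pd.DataFrame({'index': [0, 1, 2], 'question': ['a', 'b', 'c']})
preds = {0: 'ans0', 2: 'ans2'}  # sample 1 failed / was skipped
merged = attach_predictions(df, preds)
```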

Usage Examples

from vlmeval.inference import infer_data_api
from vlmeval.config import supported_VLM
from vlmeval.dataset import build_dataset

# Instantiate an API-backed model and a benchmark dataset
model = supported_VLM["GPT4o"]()
dataset = build_dataset("MMBench_DEV_EN_V11")

# Run parallel API inference with 8 threads; partial results are saved
# incrementally under work_dir, so an interrupted run can resume
results = infer_data_api(
    model=model,
    work_dir="./results",
    model_name="GPT4o",
    dataset=dataset,
    api_nproc=8,
)
