Implementation:Microsoft BIPIA Inference Pipeline

From Leeroopedia
sources: BIPIA Repository
domains: NLP, Inference, Distributed_Computing
last_updated: 2026-02-14

Overview

A concrete tool from the BIPIA benchmark library for batched LLM inference over prompt-injection-poisoned datasets.

Description

The inference() function in examples/run.py orchestrates the full pipeline for generating LLM responses to adversarially poisoned prompts. The function proceeds through the following stages:

  1. Accelerator initialization: Creates a HuggingFace Accelerator instance for distributed inference coordination. If tracking is enabled, logging integrations (TensorBoard, Weights & Biases, etc.) are configured.
  2. Dataset construction: Calls AutoPIABuilder.from_name(dataset_name) to instantiate the task-specific dataset builder (code, email, QA, abstract, or table), then builds poisoned samples by combining context data with attack data. Supports optional stealth mode (base64-encoded attack instructions) and ignore-guidance injection.
  3. LLM loading: Calls AutoLLM.from_name(llm_config_file) to resolve the model class from a YAML config or name string, then instantiates the model with the config, accelerator, and tensor parallelism settings.
  4. Prompt formatting: Maps the model-specific process_fn over the dataset. For GPT models, this constructs chat message lists or completion prompts. For HuggingFace models, this tokenizes prompts into input_ids with the model tokenizer. For vLLM models, this builds templated prompt strings via FastChat conversation templates.
  5. Resume filtering: If --resume is set and the output file exists, reads the existing JSONL results, identifies already-processed messages by content, and filters them from the dataset; if every message has already been processed, the function exits early (a minimal sketch of this filtering step follows the list).
  6. DataLoader creation: Builds a PyTorch DataLoader with the appropriate collator -- DataCollatorWithPadding when input_ids are present (HuggingFace models, with left-padding and length-based sorting), or DefaultDataCollator otherwise (GPT API and vLLM models).
  7. Generation loop: Iterates over the DataLoader under torch.no_grad(), calling llm.generate(data) for each batch. Results are accumulated into a list of dictionaries and periodically flushed to the JSONL output file.
  8. Output writing: Writes the final accumulated results to the JSONL output path.
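The resume filtering in step 5 can be approximated as below. This is a minimal sketch, not the verbatim run.py code: the helper name filter_processed is hypothetical, and matching already-processed samples by serialized message content is the behavior described above.

import json
from pathlib import Path

def filter_processed(dataset, output_path: str):
    """Drop samples whose prompt already appears in the existing JSONL output."""
    path = Path(output_path)
    if not path.exists():
        return dataset
    with path.open() as f:
        # Serialize each stored message (string or chat list) so it is hashable.
        done = {json.dumps(json.loads(line)["message"]) for line in f}
    return dataset.filter(lambda ex: json.dumps(ex["message"]) not in done)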

The function handles three model backends (a sketch of the generation settings for the two local backends follows the list):

  • GPT API (sequential per-message): GPTModel.generate() iterates over messages individually, calling chat_completion() or completion() with automatic retry on rate limits, timeouts, and API errors.
  • HuggingFace (batched tensor generation): LLMModel.generate() passes batched input_ids tensors to model.generate() with a GenerationConfig (do_sample=False, max_new_tokens=512) and model-specific stopping criteria.
  • vLLM (batched text generation): vLLMModel.generate() passes batched prompt strings to the vLLM engine with SamplingParams (temperature=0, max_tokens=2048) and conversation-template-derived stop tokens.
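The GenerationConfig and SamplingParams values cited above map onto the standard transformers and vLLM APIs. A minimal sketch, assuming an illustrative Llama-2 chat checkpoint and placeholder prompts; the model-specific stopping criteria and FastChat-derived stop tokens are omitted here.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from vllm import LLM, SamplingParams

model_name = "meta-llama/Llama-2-7b-chat-hf"  # illustrative checkpoint

# HuggingFace backend: greedy, batched tensor generation with left padding.
tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)
batch = tokenizer(["prompt one", "prompt two"], return_tensors="pt", padding=True)
with torch.no_grad():
    output_ids = model.generate(
        **batch, generation_config=GenerationConfig(do_sample=False, max_new_tokens=512)
    )

# vLLM backend: greedy, batched text generation.
engine = LLM(model=model_name, tensor_parallel_size=1)
outputs = engine.generate(
    ["prompt one", "prompt two"], SamplingParams(temperature=0, max_tokens=2048)
)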

Usage

The inference pipeline is invoked from the CLI via the examples/run.py entry point:

python examples/run.py \
    --mode inference \
    --dataset_name qa \
    --llm_config_file config/gpt35.yaml \
    --context_data_file benchmark/qa/test.jsonl \
    --attack_data_file benchmark/text_attack_test.json \
    --output_path results/gpt35_qa_inference.jsonl \
    --batch_size 8 \
    --seed 42 \
    --resume \
    --log_steps 10

Available dataset names: code, email, qa, abstract, table.

Available LLM configs include: gpt35, gpt4, alpaca_7b, alpaca_13b, vicuna_7b, vicuna_13b, vicuna_33b, llama2_7b, llama2_13b, baize_13b, koala_13b, stablevicuna, dolly, stablelm, mpt_33b, rwkv, oasst, chatglm2_6b, guanaco_33b, guanaco_65b, wizard_70b, and others.

Code Reference

Source
BIPIA repository, File: examples/run.py, Lines: L92-268
Primary function signature
def inference(args) -> None

Where args is a namespace object containing: dataset_name, context_data_file, attack_data_file, llm_config_file, output_path, batch_size, seed, enable_stealth, tensor_parallel_size, add_ign_guidance, resume, log_steps, with_tracking, report_to, logging_path, max_new_tokens.
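Because inference() consumes a plain namespace, it can also be driven without the CLI. A minimal sketch; the values are illustrative, and the field set mirrors the list above.

from argparse import Namespace

# Illustrative values; the field set mirrors parameters.parse_args() as listed above.
args = Namespace(
    dataset_name="qa",
    context_data_file="benchmark/qa/test.jsonl",
    attack_data_file="benchmark/text_attack_test.json",
    llm_config_file="config/gpt35.yaml",
    output_path="results/gpt35_qa_inference.jsonl",
    batch_size=8,
    seed=42,
    enable_stealth=False,
    tensor_parallel_size=1,
    add_ign_guidance=False,
    resume=True,
    log_steps=10,
    with_tracking=False,
    report_to="tensorboard",
    logging_path=None,
    max_new_tokens=512,
)
# inference(args)  # imported from examples/run.py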

Backend generate methods
# GPT API backend (bipia/model/gpt.py)
GPTModel.generate(self, data: Any, **kwargs) -> List[str]

# HuggingFace local backend (bipia/model/llm_worker.py)
LLMModel.generate(self, data: dict) -> List[str]

# vLLM tensor-parallel backend (bipia/model/vllm_worker.py)
vLLMModel.generate(self, data: dict) -> List[str]

Key imports
from bipia.model import AutoLLM
from bipia.data import AutoPIABuilder, DefaultDataCollator, DataCollatorWithPadding

I/O Contract

Inputs
An args namespace object produced by parameters.parse_args(), containing the following fields:

dataset_name (str): One of "code", "email", "qa", "abstract", "table"
context_data_file (str): Path to the benign context data (e.g., benchmark/qa/test.jsonl)
attack_data_file (str): Path to the attack instruction data (e.g., benchmark/text_attack_test.json)
llm_config_file (str): Path to the YAML config specifying the LLM (e.g., config/gpt35.yaml)
output_path (str): Path for the JSONL output file
batch_size (int): Number of examples per DataLoader batch
seed (int or None): Random seed for reproducibility
enable_stealth (bool): Whether to base64-encode attack instructions
tensor_parallel_size (int): Number of GPUs for vLLM tensor parallelism (default: 1)
add_ign_guidance (bool): Whether to prepend ignore-guidance defense text to prompts
resume (bool): Whether to resume from an existing output file
log_steps (int or None): Frequency, in steps, at which intermediate results are flushed to disk

The remaining fields listed under the function signature (with_tracking, report_to, logging_path, max_new_tokens) control the logging integrations and the generation length.

Outputs
A JSONL file written to output_path, where each line is a JSON object with the following fields:
attack_name (str): Name of the attack variant (e.g., "Sentiment Analysis-0")
task_name (str): Name of the benchmark task
response (str): The LLM-generated response text
message (str or list): The prompt sent to the model (a string, or a list of chat message dicts)
target (str): The ground-truth expected output (renamed from "ideal")
position (str): Position of the injected attack within the context ("middle" or "start")
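Downstream analysis can consume this file one JSON object per line. A minimal sketch; the output path is illustrative.

import json
from collections import Counter

# Count responses per attack variant in the inference output.
counts = Counter()
with open("results/gpt35_qa_inference.jsonl") as f:
    for line in f:
        record = json.loads(line)
        counts[record["attack_name"]] += 1

for attack_name, n in counts.most_common(5):
    print(f"{attack_name}: {n} responses")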

Usage Examples

CLI invocation for GPT-3.5 on the QA dataset:

python examples/run.py \
    --mode inference \
    --dataset_name qa \
    --llm_config_file config/gpt35.yaml \
    --context_data_file benchmark/qa/test.jsonl \
    --attack_data_file benchmark/text_attack_test.json \
    --output_path results/gpt35_qa.jsonl \
    --batch_size 4 \
    --seed 42 \
    --resume \
    --log_steps 50

CLI invocation for a local vLLM model with tensor parallelism:

python examples/run.py \
    --mode inference \
    --dataset_name code \
    --llm_config_file config/mistral.yaml \
    --context_data_file benchmark/code/test.jsonl \
    --attack_data_file benchmark/code_attack_test.json \
    --output_path results/mistral_code.jsonl \
    --batch_size 16 \
    --tensor_parallel_size 4 \
    --seed 42 \
    --resume

Programmatic usage (reconstructed from run.py internals):

from functools import partial

import torch
from torch.utils.data import DataLoader
from datasets import Dataset
from accelerate import Accelerator

from bipia.model import AutoLLM
from bipia.data import AutoPIABuilder, DefaultDataCollator, DataCollatorWithPadding

# Initialize accelerator
accelerator = Accelerator()

# Build poisoned dataset
pia_builder = AutoPIABuilder.from_name("qa")(seed=42)
pia_samples = pia_builder(
    "benchmark/qa/test.jsonl",
    "benchmark/text_attack_test.json",
    enable_stealth=False,
)
pia_dataset = Dataset.from_pandas(pia_samples)

# Load model
llm = AutoLLM.from_name("config/gpt35.yaml")(
    config="config/gpt35.yaml",
    accelerator=accelerator,
    tensor_parallel_size=1,
)

# Format prompts
processed = pia_dataset.map(
    partial(
        llm.process_fn,
        prompt_construct_fn=partial(
            pia_builder.construct_prompt,
            require_system_prompt=llm.require_system_prompt,
            ign_guidance="",
        ),
    ),
)

# Build DataLoader
if "input_ids" in processed.column_names:
    collator = DataCollatorWithPadding(llm.tokenizer)
else:
    collator = DefaultDataCollator()

dataloader = DataLoader(processed, batch_size=4, collate_fn=collator)

# Run inference
results = []
with torch.no_grad():
    for step, data in enumerate(dataloader):
        responses = llm.generate(data)
        for attack_name, task_name, target, response, message, position in zip(
            data["attack_name"], data["task_name"], data["target"],
            responses, data["message"], data["position"],
        ):
            results.append({
                "attack_name": attack_name,
                "task_name": task_name,
                "response": response,
                "message": message,
                "target": target,
                "position": position,
            })
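To mirror stage 8 (and the periodic flushing driven by log_steps), the accumulated results can be written one JSON object per line. A minimal sketch; the output path is illustrative.

import json

# Final write: one JSON object per line, matching the documented output schema.
with open("results/gpt35_qa.jsonl", "w") as f:
    for record in results:
        f.write(json.dumps(record) + "\n")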
