Principle: FMInference FlexLLMGen Data Wrangling Batch Inference
| Field | Value |
|---|---|
| Sources | FlexLLMGen, fm_data_tasks |
| Domains | Batch_Processing, Data_Wrangling |
| Last Updated | 2026-02-09 00:00 GMT |
Overview
A batch processing strategy that runs LLM inference over structured datasets by constructing prompt batches, generating predictions in fixed-size groups, and extracting labels from decoded output.
Description
Data wrangling tasks require running inference over hundreds or thousands of examples. The batch inference strategy groups examples into batches of size gpu_batch_size * num_gpu_batches, pads prompts to a uniform length, generates completions, and extracts the predicted labels by removing the input prefix from the decoded output. The batch_query_test function orchestrates this: it creates a new model instance per batch (to handle varying sequence lengths), runs generation, and collects the predictions. Results are saved to .feather files for later analysis.
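The following is a minimal sketch of this loop, assuming a Hugging Face tokenizer and a caller-supplied generate_fn that wraps the actual generation call; batch_query_sketch and its parameters are illustrative names, not the real batch_query_test signature.

```python
# Minimal sketch of the batch loop; batch_query_sketch, generate_fn, and the
# tokenizer choice are assumptions for illustration, not the FlexLLMGen API.
from typing import Callable, List

import pandas as pd
from transformers import AutoTokenizer


def batch_query_sketch(
    prompts: List[str],
    generate_fn: Callable[..., List[str]],  # wraps one padded-batch generation call,
                                            # returning decoded strings (prompt included)
    gpu_batch_size: int = 8,
    num_gpu_batches: int = 4,
    out_path: str = "predictions.feather",
) -> List[str]:
    batch_size = gpu_batch_size * num_gpu_batches  # effective prompts per generation call
    tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b", padding_side="left")
    predictions = []

    for start in range(0, len(prompts), batch_size):
        batch = prompts[start:start + batch_size]
        # Pad every prompt in the batch to the same length so the batch forms
        # a rectangular tensor for generation.
        inputs = tokenizer(batch, padding=True, return_tensors="np")
        decoded = generate_fn(inputs.input_ids)
        # Extract the predicted label by stripping the input prefix from the
        # decoded output.
        predictions.extend(out[len(p):].strip() for p, out in zip(batch, decoded))

    # Persist results for analysis, mirroring the .feather output described above.
    pd.DataFrame({"prompt": prompts, "prediction": predictions}).to_feather(out_path)
    return predictions
```

In the real pipeline, generate_fn would stand in for constructing a fresh model instance sized to the batch's padded sequence length and running generation on it, as described above.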
Usage
Use for systematic evaluation of LLM data wrangling capabilities across large datasets. Prefer batch mode (--batch_run) over single-query mode for higher throughput.
Theoretical Basis
Batched inference amortizes model loading and I/O overhead across multiple examples. The effective batch size is constrained to gpu_batch_size * num_gpu_batches, both taken from the FlexLLMGen Policy. Padding prompts to a uniform length is required so batched tensor operations run over a rectangular input.
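For example, with placeholder values (not FlexLLMGen defaults):

```python
# Worked example of the effective batch size and the resulting number of
# generation passes; all numbers are placeholders for illustration.
gpu_batch_size = 8       # prompts per GPU batch
num_gpu_batches = 4      # GPU batches per generation pass
effective_batch = gpu_batch_size * num_gpu_batches  # 32 prompts per pass

num_examples = 1000
num_passes = -(-num_examples // effective_batch)    # ceil(1000 / 32) = 32 passes
print(effective_batch, num_passes)                  # 32 32
```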