Principle: Predibase LoRAX Continuous Batching Inference
| Knowledge Sources | |
|---|---|
| Domains | Inference_Optimization, Scheduling |
| Last Updated | 2026-02-08 02:00 GMT |
Overview
A scheduling strategy that dynamically adds and removes requests from running inference batches at each decode step, maximizing GPU utilization across multiple concurrent LoRA adapters.
Description
Continuous Batching addresses the inefficiency of static batching, where all requests in a batch must complete before new ones can start. In LoRAX, the scheduler:
- Maintains a queue of pending requests grouped by adapter
- At each decode step, evaluates whether new requests can be added to the running batch
- Manages token budgets for both prefill and decode operations
- Tracks adapter-specific state to limit the number of active adapters
The LoRAX-specific innovation is adapter-aware scheduling: the scheduler groups requests by adapter to maximize batched LoRA kernel efficiency (SGMV/BGMV).
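As a rough illustration of adapter-aware admission, the sketch below groups pending requests by adapter and prefers adapters already in the batch. This is not LoRAX's actual implementation; the `Request` shape, `next_batch` signature, and `max_active_adapters` parameter are illustrative assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Request:
    request_id: int
    adapter_id: str       # which LoRA adapter this request targets
    prompt_tokens: int    # tokens needed for prefill

def next_batch(pending, token_budget, active_adapters, max_active_adapters):
    """Pick pending requests that fit the token budget, preferring adapters
    already in the batch so batched LoRA kernels (SGMV/BGMV) stay dense."""
    by_adapter = defaultdict(list)
    for req in pending:
        by_adapter[req.adapter_id].append(req)

    # Visit already-active adapters first, then new adapters up to the limit.
    ordered = sorted(by_adapter, key=lambda a: a not in active_adapters)
    selected, used_tokens, adapters = [], 0, set(active_adapters)
    for adapter_id in ordered:
        if adapter_id not in adapters and len(adapters) >= max_active_adapters:
            continue  # admitting this adapter would exceed the active-adapter limit
        for req in by_adapter[adapter_id]:
            if used_tokens + req.prompt_tokens > token_budget:
                continue  # request does not fit the remaining token budget
            selected.append(req)
            used_tokens += req.prompt_tokens
            adapters.add(adapter_id)
    return selected
```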
Usage
This principle operates automatically in the LoRAX router. It is not directly configured by users, though the `max-batch-total-tokens` and `max-batch-prefill-tokens` launcher arguments control its behavior.
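For intuition, here is a minimal sketch of how those two limits could gate admission of a new request. The parameter names mirror the launcher arguments, but the function itself is an illustrative assumption, not LoRAX code.

```python
def can_admit(request_prompt_tokens, request_max_new_tokens,
              batch_current_tokens, pending_prefill_tokens,
              max_batch_prefill_tokens, max_batch_total_tokens):
    # Prefill budget: tokens prefilled together in one step must stay bounded.
    fits_prefill = (pending_prefill_tokens + request_prompt_tokens
                    <= max_batch_prefill_tokens)
    # Total budget: prompt plus worst-case generated tokens across the batch.
    fits_total = (batch_current_tokens
                  + request_prompt_tokens
                  + request_max_new_tokens
                  <= max_batch_total_tokens)
    return fits_prefill and fits_total
```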
Theoretical Basis
Continuous batching maximizes batch size by filling gaps left by completed requests:
Pseudo-code:
```python
# Continuous batching scheduler
while True:
    # Try to add new requests from the queue
    new_entries = queue.next_batch(
        token_budget=max_tokens - current_tokens,
        adapters_in_use=active_adapters
    )
    batch = merge(current_batch, new_entries)

    # Run one decode step
    generations, next_batch = model.generate_token(batch)

    # Return completed generations, continue with the remaining requests
    for gen in generations:
        if gen.is_finished():
            send_response(gen)
    current_batch = next_batch
```
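To make the gap-filling behavior concrete, the following self-contained toy simulation runs the same loop against fake requests. All names and numbers are made up; it models only the scheduling pattern (slots freed by finished requests are refilled every step), not real decoding, token budgets, or adapters.

```python
import random
from collections import deque

class FakeRequest:
    def __init__(self, rid, length):
        self.rid = rid
        self.remaining = length  # decode steps until this request finishes

def simulate(num_requests=8, max_batch_size=4, seed=0):
    rng = random.Random(seed)
    queue = deque(FakeRequest(i, rng.randint(2, 6)) for i in range(num_requests))
    batch, step = [], 0
    while queue or batch:
        # Continuous batching: refill free slots from the queue at every step.
        while queue and len(batch) < max_batch_size:
            batch.append(queue.popleft())
        step += 1
        # One "decode step": every request in the batch produces a token.
        for req in batch:
            req.remaining -= 1
        for req in batch:
            if req.remaining == 0:
                print(f"step {step}: request {req.rid} finished")
        batch = [req for req in batch if req.remaining > 0]
    print(f"all requests done after {step} steps")

if __name__ == "__main__":
    simulate()
```

Running the simulation shows short requests leaving the batch early while queued requests immediately take their place, which is the utilization gain over static batching.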