Principle: Wandb Weave LLM Call Interception
| Knowledge Sources | |
|---|---|
| Domains | Observability, LLM_Operations |
| Last Updated | 2026-02-14 00:00 GMT |
Overview
A transparent interception pattern that captures LLM provider API calls and their streaming responses without modifying user code.
Description
LLM Call Interception uses the SymbolPatcher mechanism to replace provider SDK methods with traced wrappers. These wrappers delegate to the original method while capturing inputs, outputs, timing, and token usage. For streaming responses, an accumulator function merges incremental chunks into a complete response.
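To make the delegation concrete, here is a minimal sketch of such a traced wrapper. The traced decorator and its record callback are hypothetical illustrations written for this document, not Weave's API; the real weave.op machinery captures far more context.

```python
import functools
import time

def traced(record):
    """Wrap a provider SDK method so each call is recorded transparently.

    `record` is a hypothetical callback standing in for Weave's tracing
    backend; weave.op itself is considerably richer.
    """
    def decorator(original):
        @functools.wraps(original)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                response = original(*args, **kwargs)  # transparent delegation
                record(inputs={"args": args, "kwargs": kwargs},
                       output=response,
                       latency_s=time.monotonic() - start, error=None)
                return response
            except Exception as exc:
                record(inputs={"args": args, "kwargs": kwargs},
                       output=None,
                       latency_s=time.monotonic() - start, error=exc)
                raise
        return wrapper
    return decorator
```

Because the wrapper returns the original response unchanged and re-raises any exception, user code behaves exactly as it would without patching.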
Usage
Once patching is enabled, this principle operates transparently: users make normal LLM API calls (e.g., openai.chat.completions.create()) and tracing happens automatically in the background.
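A minimal usage sketch, assuming the openai Python SDK is installed and an API key is configured; the project name and model below are placeholders:

```python
import weave
from openai import OpenAI

# Initializing Weave patches supported provider SDKs in the background.
weave.init("my-project")

client = OpenAI()
# A completely normal API call -- the traced wrapper records it automatically.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```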
Theoretical Basis
The interception uses a proxy wrapper pattern; illustrative sketches of the patching and accumulation steps appear after the list:
- Symbol Resolution: SymbolPatcher resolves the target method on the provider SDK (e.g., openai.resources.chat.completions.Completions.create).
- Replacement: The original method is saved and replaced with a weave.op-wrapped version.
- Transparent Delegation: The wrapper calls the original method, passing all arguments through.
- Streaming Accumulation: For streaming responses, an accumulator function merges ChatCompletionChunk objects into a complete ChatCompletion.
- Usage Extraction: Token counts and cost information are extracted from the response and included in the call summary.
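The resolution, replacement, and delegation steps can be sketched as follows. SimpleSymbolPatcher is an illustrative stand-in written for this document, not Weave's actual SymbolPatcher class or API:

```python
import importlib

class SimpleSymbolPatcher:
    """Resolve a dotted attribute path on a module, save the original
    attribute, and swap in a wrapped version. Illustrative only."""

    def __init__(self, module_name, attr_path, make_wrapper):
        self.module_name = module_name
        self.attr_path = attr_path.split(".")
        self.make_wrapper = make_wrapper
        self.original = None

    def _resolve_owner(self):
        obj = importlib.import_module(self.module_name)
        for name in self.attr_path[:-1]:      # symbol resolution
            obj = getattr(obj, name)
        return obj, self.attr_path[-1]

    def attempt_patch(self):
        owner, name = self._resolve_owner()
        self.original = getattr(owner, name)  # save the original method
        setattr(owner, name, self.make_wrapper(self.original))  # replacement

    def undo_patch(self):
        owner, name = self._resolve_owner()
        setattr(owner, name, self.original)   # restore the original

# Example target from the text, assuming the openai SDK layout
# (uses the hypothetical traced decorator from the Description section):
# SimpleSymbolPatcher("openai.resources.chat.completions",
#                     "Completions.create", traced(print)).attempt_patch()
```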
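The streaming-accumulation and usage-extraction steps can be sketched similarly. This version operates on plain dicts as simplified stand-ins for ChatCompletionChunk objects; the usage field on the final chunk is an assumption modeled on OpenAI's include_usage stream option:

```python
def accumulate_chunks(chunks):
    """Merge streamed chat-completion chunks into one complete response.

    A simplified sketch over plain dicts; Weave's real accumulator works
    on the SDK's ChatCompletionChunk objects.
    """
    acc = {"id": None, "model": None, "content": [],
           "finish_reason": None, "usage": None}
    for chunk in chunks:
        acc["id"] = acc["id"] or chunk.get("id")
        acc["model"] = acc["model"] or chunk.get("model")
        if chunk.get("usage"):            # token counts for the call summary
            acc["usage"] = chunk["usage"]
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content"):      # append the incremental text
                acc["content"].append(delta["content"])
            if choice.get("finish_reason"):
                acc["finish_reason"] = choice["finish_reason"]
    acc["content"] = "".join(acc["content"])
    return acc

# Two incremental chunks merge into one complete response.
chunks = [
    {"id": "c1", "model": "m", "choices": [{"delta": {"content": "Hel"}}]},
    {"id": "c1", "model": "m",
     "choices": [{"delta": {"content": "lo"}, "finish_reason": "stop"}],
     "usage": {"prompt_tokens": 5, "completion_tokens": 2,
               "total_tokens": 7}},
]
result = accumulate_chunks(chunks)
assert result["content"] == "Hello"
assert result["usage"]["total_tokens"] == 7
```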