Principle: LangChain Response Parsing
| Knowledge Sources | Details |
|---|---|
| Domains | NLP, Data_Transformation |
| Last Updated | 2026-02-11 00:00 GMT |
Overview
A post-processing step that transforms raw provider API responses into LangChain's standardized message and metadata format.
Description
After the provider API returns a raw response (e.g., an OpenAI completion object), response parsing extracts the relevant information and structures it into LangChain's unified format:
- Content extraction: Text content, refusal messages, or audio data
- Tool call parsing: Structured tool call objects from the model's function calling output
- Usage metadata: Token counts (input, output, total) with breakdown details (cached tokens, reasoning tokens)
- Response metadata: Model name, system fingerprint, finish reason
This normalization ensures that downstream code (output parsers, chains, agents) can work with any provider's output through a consistent interface.
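As a concrete illustration, the sketch below normalizes a raw OpenAI-style chat completion dict into a flat, provider-agnostic structure covering the four extraction steps listed above. The input field names follow the OpenAI response shape; the output dict and the `normalize_response` function are illustrative stand-ins, not LangChain's actual classes or API.

```python
# Hypothetical sketch: normalize a raw OpenAI-style completion dict
# into a provider-agnostic structure. Output shape is illustrative.
def normalize_response(raw: dict) -> dict:
    choice = raw["choices"][0]
    msg = choice["message"]
    usage = raw.get("usage", {})
    return {
        # Content extraction: text (empty for pure tool-call turns)
        "content": msg.get("content") or "",
        # Tool call parsing: flatten provider tool-call objects
        "tool_calls": [
            {"name": tc["function"]["name"], "args": tc["function"]["arguments"]}
            for tc in msg.get("tool_calls", [])
        ],
        # Usage metadata: token counts
        "usage_metadata": {
            "input_tokens": usage.get("prompt_tokens", 0),
            "output_tokens": usage.get("completion_tokens", 0),
            "total_tokens": usage.get("total_tokens", 0),
        },
        # Response metadata: model name and finish reason
        "response_metadata": {
            "model_name": raw.get("model"),
            "finish_reason": choice.get("finish_reason"),
        },
    }

# A minimal fake raw response for demonstration
raw = {
    "model": "gpt-4o-mini",
    "choices": [{
        "finish_reason": "stop",
        "message": {"content": "Hello!", "tool_calls": []},
    }],
    "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7},
}
parsed = normalize_response(raw)
```

Downstream code now reads `parsed["content"]` and `parsed["usage_metadata"]` without knowing which provider produced the raw response.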
Usage
This principle is applied automatically after every API call. It is essential for multi-provider applications where the same downstream logic must handle responses from different LLM providers.
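The payoff of normalization can be sketched in a few lines: once every provider's response is parsed into the same shape, one piece of downstream logic serves them all. The message dicts below are hypothetical stand-ins for parsed message objects.

```python
# Hypothetical sketch: downstream logic that works on any provider's
# parsed output, because parsing produced a uniform shape.
def total_tokens(messages: list[dict]) -> int:
    # Identical code path whether a message originated from OpenAI,
    # Anthropic, or any other provider.
    return sum(m["usage_metadata"]["total_tokens"] for m in messages)

openai_msg = {"content": "hi", "usage_metadata": {"total_tokens": 7}}
anthropic_msg = {"content": "hello", "usage_metadata": {"total_tokens": 11}}
combined = total_tokens([openai_msg, anthropic_msg])  # 18
```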
Theoretical Basis
Response parsing follows the Normalizer Pattern:
```python
# Abstract algorithm (not real code)
def parse_response(raw_response):
    generations = []
    for choice in raw_response.choices:
        message = AIMessage(
            content=choice.message.content,
            tool_calls=parse_tool_calls(choice.message.tool_calls),
            usage_metadata=parse_usage(raw_response.usage),
            response_metadata=extract_metadata(raw_response),
        )
        generations.append(ChatGeneration(message=message))
    return ChatResult(generations=generations)
```
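The abstract algorithm above can be made runnable with plain dataclasses standing in for LangChain's `AIMessage`, `ChatGeneration`, and `ChatResult` types; this is a minimal sketch of the Normalizer Pattern, not LangChain's actual implementation.

```python
# Runnable sketch of the Normalizer Pattern, with simplified stand-ins
# for LangChain's message types.
from dataclasses import dataclass, field
from types import SimpleNamespace

@dataclass
class AIMessage:
    content: str
    tool_calls: list = field(default_factory=list)
    usage_metadata: dict = field(default_factory=dict)
    response_metadata: dict = field(default_factory=dict)

@dataclass
class ChatGeneration:
    message: AIMessage

@dataclass
class ChatResult:
    generations: list

def parse_response(raw):
    generations = []
    for choice in raw.choices:
        message = AIMessage(
            content=choice.message.content,
            tool_calls=list(choice.message.tool_calls),
            usage_metadata={"total_tokens": raw.usage.total_tokens},
            response_metadata={"model_name": raw.model},
        )
        generations.append(ChatGeneration(message=message))
    return ChatResult(generations=generations)

# A minimal fake provider response for demonstration
raw = SimpleNamespace(
    model="example-model",
    usage=SimpleNamespace(total_tokens=9),
    choices=[SimpleNamespace(message=SimpleNamespace(content="ok", tool_calls=[]))],
)
result = parse_response(raw)
```

Each choice in the raw response becomes one `ChatGeneration`, so multi-choice responses (`n > 1`) map naturally onto the `generations` list.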