Implementation: LangChain BaseChatOpenAI._create_chat_result
| Knowledge Sources | |
|---|---|
| Domains | NLP, Data_Transformation |
| Last Updated | 2026-02-11 00:00 GMT |
Overview
A concrete helper in the LangChain OpenAI integration that parses raw OpenAI API responses into LangChain ChatResult objects.
Description
The BaseChatOpenAI._create_chat_result() method takes a raw OpenAI API response (dict or openai.BaseModel) and constructs a ChatResult containing ChatGeneration objects. Each generation wraps an AIMessage with parsed content, tool calls, usage metadata (UsageMetadata with InputTokenDetails and OutputTokenDetails), and response metadata.
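As a rough illustration of the mapping described above, here is a simplified, standalone sketch (not the library's actual code; the helper name and the trimmed response shape are assumptions for illustration):

```python
def parse_chat_completion(response: dict) -> dict:
    """Simplified sketch of the parsing _create_chat_result() performs.
    Returns a plain dict rather than a ChatResult for illustration."""
    generations = []
    for choice in response.get("choices", []):
        message = choice["message"]
        generations.append({
            "content": message.get("content") or "",
            "finish_reason": choice.get("finish_reason"),
        })
    usage = response.get("usage") or {}
    usage_metadata = {
        "input_tokens": usage.get("prompt_tokens", 0),
        "output_tokens": usage.get("completion_tokens", 0),
        "total_tokens": usage.get("total_tokens", 0),
    }
    return {
        "generations": generations,
        "usage_metadata": usage_metadata,
        "response_metadata": {
            "model_name": response.get("model"),
            "system_fingerprint": response.get("system_fingerprint"),
        },
    }

# A trimmed-down Chat Completions payload (illustrative, not exhaustive)
raw = {
    "model": "gpt-4o-mini",
    "system_fingerprint": "fp_example",
    "choices": [{"message": {"role": "assistant", "content": "Hi!"},
                 "finish_reason": "stop"}],
    "usage": {"prompt_tokens": 8, "completion_tokens": 2, "total_tokens": 10},
}
parsed = parse_chat_completion(raw)
```

The real method builds AIMessage, ChatGeneration, and ChatResult objects instead of plain dicts, but the field-by-field mapping is the same idea.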
Usage
This is an internal method called by _generate() after receiving the raw API response. It handles both the Chat Completions API and the newer Responses API format.
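The two payload shapes differ at the top level: Chat Completions responses carry a `choices` list, while Responses API payloads carry an `output` list of items. A minimal shape check might look like the following (field names are assumptions based on OpenAI's public payload formats, not taken from this source):

```python
def looks_like_responses_api(payload: dict) -> bool:
    # Responses API payloads have a top-level "output" list of items;
    # Chat Completions payloads have a top-level "choices" list.
    return "output" in payload and "choices" not in payload

chat_payload = {"object": "chat.completion", "choices": []}
responses_payload = {"object": "response", "output": []}
```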
Code Reference
Source Location
- Repository: langchain
- File: libs/partners/openai/langchain_openai/chat_models/base.py
- Lines: L1480-1596
Signature
def _create_chat_result(
self,
response: dict | openai.BaseModel,
generation_info: dict | None = None,
) -> ChatResult:
Import
# Internal method — accessed via ChatOpenAI instance
from langchain_openai import ChatOpenAI
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| response | dict or openai.BaseModel | Yes | Raw API response from OpenAI |
| generation_info | dict or None | No | Additional generation metadata |
Outputs
| Name | Type | Description |
|---|---|---|
| return | ChatResult | Parsed result with ChatGeneration objects containing AIMessage, usage_metadata, and response_metadata |
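The usage portion of the output is built by mapping OpenAI's `usage` block onto LangChain's UsageMetadata shape. A hedged sketch of that mapping (OpenAI field names such as `prompt_tokens_details.cached_tokens` are assumptions from the public API, not from this source):

```python
def to_usage_metadata(usage: dict) -> dict:
    """Sketch of mapping OpenAI's `usage` block onto the
    UsageMetadata dict shape LangChain attaches to AIMessage."""
    input_details = usage.get("prompt_tokens_details") or {}
    output_details = usage.get("completion_tokens_details") or {}
    return {
        "input_tokens": usage.get("prompt_tokens", 0),
        "output_tokens": usage.get("completion_tokens", 0),
        "total_tokens": usage.get("total_tokens", 0),
        "input_token_details": {
            "cache_read": input_details.get("cached_tokens", 0),
        },
        "output_token_details": {
            "reasoning": output_details.get("reasoning_tokens", 0),
        },
    }

usage_md = to_usage_metadata({
    "prompt_tokens": 8, "completion_tokens": 12, "total_tokens": 20,
    "prompt_tokens_details": {"cached_tokens": 0},
    "completion_tokens_details": {"reasoning_tokens": 0},
})
```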
Usage Examples
Accessing Parsed Response Data
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini")
result = llm.invoke("Hello!")
# Content
print(result.content) # "Hello! How can I help you?"
# Usage metadata (parsed by _create_chat_result)
print(result.usage_metadata)
# {'input_tokens': 8, 'output_tokens': 12, 'total_tokens': 20,
#  'input_token_details': {'audio': 0, 'cache_read': 0},
#  'output_token_details': {'audio': 0, 'reasoning': 0}}
# Response metadata
print(result.response_metadata)
# {'model_name': 'gpt-4o-mini', 'system_fingerprint': 'fp_...', 'finish_reason': 'stop'}
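The Description also notes that tool calls are parsed. In the raw Chat Completions payload a tool call arrives with its arguments as a JSON string, while LangChain exposes decoded `args`. A hedged sketch of that conversion (a simplified re-implementation; the actual parsing lives inside the library):

```python
import json

def parse_tool_call(raw_tool_call: dict) -> dict:
    """Sketch of converting one raw OpenAI tool call into the
    tool_call dict shape found on AIMessage.tool_calls."""
    fn = raw_tool_call["function"]
    return {
        "name": fn["name"],
        "args": json.loads(fn["arguments"]),  # arguments arrive as a JSON string
        "id": raw_tool_call.get("id"),
        "type": "tool_call",
    }

# Illustrative raw tool call as returned by the Chat Completions API
raw = {"id": "call_abc", "type": "function",
       "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}}
tc = parse_tool_call(raw)
```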