Implementation: LangChain BaseChatOpenAI with_structured_output
| Knowledge Sources | |
|---|---|
| Domains | NLP, Data_Extraction, Structured_Output |
| Last Updated | 2026-02-11 00:00 GMT |
Overview
A concrete tool from the LangChain OpenAI integration for extracting structured data from LLM responses.
Description
The BaseChatOpenAI.with_structured_output() method returns a Runnable that chains the model with an output parser. It supports three extraction methods: function_calling (uses bind_tools + PydanticToolsParser), json_mode (forces JSON output + JsonOutputParser), and json_schema (OpenAI Structured Outputs with schema enforcement). The ChatOpenAI subclass overrides the default method to json_schema.
Usage
Call with_structured_output() with a Pydantic model or JSON schema to get type-safe responses from the model.
Code Reference
Source Location
- Repository: langchain
- File: libs/partners/openai/langchain_openai/chat_models/base.py
- Lines: L1961-2078 (BaseChatOpenAI), L3080-3170 (ChatOpenAI override)
Signature
def with_structured_output(
self,
schema: _DictOrPydanticClass | None = None,
*,
method: Literal[
"function_calling", "json_mode", "json_schema"
] = "function_calling",
include_raw: bool = False,
strict: bool | None = None,
tools: list | None = None,
**kwargs: Any,
) -> Runnable[LanguageModelInput, _DictOrPydantic]:
Import
from langchain_openai import ChatOpenAI
from pydantic import BaseModel
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| schema | type[BaseModel] or dict or None | No (default: None) | Pydantic model class or JSON schema dict for the output |
| method | str | No (default: "function_calling"; ChatOpenAI overrides to "json_schema") | Extraction method |
| include_raw | bool | No (default: False) | Include raw AIMessage alongside parsed output |
| strict | bool or None | No (default: None) | Enforce strict schema matching on the API side |
| tools | list or None | No (default: None) | Additional tools to bind alongside the structured-output schema |
Outputs
| Name | Type | Description |
|---|---|---|
| return | Runnable | Returns Pydantic instance or dict matching schema; with include_raw=True returns {"raw": AIMessage, "parsed": ..., "parsing_error": ...} |
Usage Examples
Pydantic Schema
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
class Person(BaseModel):
name: str = Field(description="Person's full name")
age: int = Field(description="Person's age")
occupation: str = Field(description="Person's job title")
llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(Person)
result = structured_llm.invoke("Tell me about Marie Curie")
print(type(result)) # <class '__main__.Person'>
print(result.name) # "Marie Curie"
print(result.age) # 66
With Raw Output
structured_llm = llm.with_structured_output(Person, include_raw=True)
result = structured_llm.invoke("Tell me about Einstein")
print(result["parsed"].name) # "Albert Einstein"
print(result["raw"].content) # Raw AIMessage
print(result["parsing_error"]) # None if successful