Implementation: LangChain ChatOpenAI Constructor
| Knowledge Sources | Details |
|---|---|
| Domains | NLP, LLM_Integration |
| Last Updated | 2026-02-11 00:00 GMT |
Overview
Concrete tool for initializing OpenAI-compatible chat model instances, provided by the LangChain OpenAI integration package (langchain-openai).
Description
The ChatOpenAI class (and its base BaseChatOpenAI) creates a fully configured chat model instance that communicates with the OpenAI API (or any OpenAI-compatible endpoint). It inherits from BaseChatModel and configures an OpenAI HTTP client, resolves model profiles for capability metadata, and validates API credentials at construction time.
ChatAnthropic and ChatOllama follow the same initialization pattern but target different provider APIs.
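Because credentials are validated at construction time, a missing or malformed key fails fast when the instance is created rather than on the first request. A minimal sketch, assuming OPENAI_API_KEY is not set in the environment:

from langchain_openai import ChatOpenAI

# With no api_key argument and no OPENAI_API_KEY in the environment,
# construction itself raises a validation error instead of deferring
# the failure to the first invoke() call.
try:
    llm = ChatOpenAI(model="gpt-4o-mini")
except Exception as exc:
    print(f"Construction failed: {exc}")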
Usage
Import and instantiate ChatOpenAI when working with OpenAI models (GPT-4o, GPT-4o-mini, o1, etc.). For OpenAI-compatible providers (DeepSeek, Groq, Fireworks), use the provider's dedicated integration class, which follows the same initialization pattern (and in cases such as ChatDeepSeek subclasses BaseChatOpenAI with the API base URL overridden), or point ChatOpenAI itself at the compatible endpoint, as sketched below.
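Endpoints that speak the OpenAI wire protocol can be reached with ChatOpenAI directly by overriding base_url. A sketch in which the URL, model name, and key are illustrative placeholders, not real defaults:

from langchain_openai import ChatOpenAI

# Hypothetical OpenAI-compatible endpoint; substitute your provider's
# actual base URL, model identifier, and API key.
llm = ChatOpenAI(
    model="provider-model-name",
    base_url="https://api.example-provider.com/v1",
    api_key="provider-api-key",
)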
Code Reference
Source Location
- Repository: langchain
- File: libs/partners/openai/langchain_openai/chat_models/base.py
- Lines: L513-2260 (BaseChatOpenAI), L2262-3493 (ChatOpenAI)
Signature
class BaseChatOpenAI(BaseChatModel):
    """OpenAI chat model base class."""

    # Key fields (Pydantic model):
    model_name: str = Field(alias="model")
    temperature: float | None = None
    model_kwargs: dict[str, Any] = Field(default_factory=dict)
    openai_api_key: SecretStr | None | Callable[[], str] | Callable[[], Awaitable[str]] = Field(alias="api_key", default=None)
    openai_api_base: str | None = Field(alias="base_url", default=None)
    openai_organization: str | None = Field(alias="organization", default=None)
    openai_proxy: str | None = Field(default=None)
    request_timeout: float | tuple[float, float] | Any | None = Field(default=None, alias="timeout")
    max_retries: int = 2
    streaming: bool = False
    n: int = 1
    max_tokens: int | None = None
    default_headers: Mapping[str, str] | None = None
    default_query: Mapping[str, object] | None = None
    http_client: Any | None = None
    http_async_client: Any | None = None
    stop: list[str] | str | None = Field(default=None, alias="stop_sequences")
    rate_limiter: BaseRateLimiter | None = Field(default=None, exclude=True)
    disable_streaming: bool | Literal["tool_calling"] = False
    stream_usage: bool = True


class ChatOpenAI(BaseChatOpenAI):
    """OpenAI chat model integration."""
Import
from langchain_openai import ChatOpenAI
# Or for Anthropic:
from langchain_anthropic import ChatAnthropic
# Or for Ollama:
from langchain_ollama import ChatOllama
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| model | str | No (default: "gpt-4o-mini") | Model identifier (e.g., "gpt-4o", "gpt-4o-mini") |
| api_key | SecretStr, callable, or None | No (from env OPENAI_API_KEY) | OpenAI API key, or a sync/async callable returning one |
| temperature | float or None | No | Sampling temperature (0.0 to 2.0) |
| max_tokens | int or None | No | Maximum output tokens |
| streaming | bool | No (default: False) | Default streaming mode |
| rate_limiter | BaseRateLimiter or None | No | Optional rate limiter instance |
| base_url | str or None | No (from env) | Custom API base URL for compatible providers |
| timeout | float, tuple, or None | No | Request timeout in seconds; a timeout tuple/object is also accepted |
| max_retries | int | No (default: 2) | Maximum retry attempts |
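Several inputs fall back to environment variables when omitted. A short sketch, assuming the standard OpenAI variable names (OPENAI_API_KEY and OPENAI_API_BASE):

import os
from langchain_openai import ChatOpenAI

# Arguments omitted from the constructor are resolved from the environment.
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["OPENAI_API_BASE"] = "https://api.example-provider.com/v1"

llm = ChatOpenAI(model="gpt-4o-mini")  # key and base URL come from the env vars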
Outputs
| Name | Type | Description |
|---|---|---|
| instance | ChatOpenAI | Initialized chat model implementing Runnable[LanguageModelInput, AIMessage] |
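Since the instance is a Runnable, it exposes the standard invoke/stream/batch surface. A brief sketch of the three call styles:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

# invoke: one input in, one AIMessage out
message = llm.invoke("What is LangChain?")

# stream: yields AIMessageChunk objects as tokens arrive
for chunk in llm.stream("What is LangChain?"):
    print(chunk.content, end="")

# batch: a list of inputs in, a list of AIMessage objects out
messages = llm.batch(["What is LangChain?", "What is LCEL?"])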
Usage Examples
Basic OpenAI Initialization
from langchain_openai import ChatOpenAI
# Initialize with defaults (reads OPENAI_API_KEY from environment)
llm = ChatOpenAI(model="gpt-4o-mini")
# Initialize with explicit configuration
llm = ChatOpenAI(
model="gpt-4o",
temperature=0.0,
max_tokens=1024,
api_key="sk-...",
)
# Invoke the model directly (a chained example follows below)
response = llm.invoke("What is LangChain?")
print(response.content)
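The instance also composes with other Runnables via the | operator; for example, piping a prompt template into the model:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer concisely."),
    ("human", "{question}"),
])
llm = ChatOpenAI(model="gpt-4o-mini")

# The prompt's output feeds directly into the model.
chain = prompt | llm
response = chain.invoke({"question": "What is LangChain?"})
print(response.content)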
Anthropic Initialization
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(
model="claude-sonnet-4-20250514",
temperature=0.0,
max_tokens=1024,
)
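Ollama Initialization
The same pattern covers locally served models; a sketch assuming a model already pulled into a local Ollama server (the model name is illustrative):

from langchain_ollama import ChatOllama

llm = ChatOllama(
    model="llama3.1",  # assumes this model has been pulled locally
    temperature=0.0,
    base_url="http://localhost:11434",  # Ollama's default server address
)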
With Rate Limiting
from langchain_openai import ChatOpenAI
from langchain_core.rate_limiters import InMemoryRateLimiter
rate_limiter = InMemoryRateLimiter(
requests_per_second=1,
check_every_n_seconds=0.1,
max_bucket_size=10,
)
llm = ChatOpenAI(
model="gpt-4o-mini",
rate_limiter=rate_limiter,
)
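The limiter spaces out request starts for every call made through this instance; note that InMemoryRateLimiter throttles requests, not tokens. Successive invocations are therefore paced to roughly one per second here:

# Each call waits for the limiter before the request is sent.
for question in ["What is LangChain?", "What is LCEL?"]:
    print(llm.invoke(question).content)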