Principle:Langchain ai Langchain Chat Model Implementation

From Leeroopedia

Overview

The standard approach to implementing a chat model is to subclass BaseChatModel (or BaseChatOpenAI for OpenAI-compatible APIs) and provide the required methods and properties.

Description

The Chat Model Implementation principle defines how to create a new chat model integration in the LangChain ecosystem. Every chat model must ultimately inherit from langchain_core.language_models.BaseChatModel, which provides the standard interface for invoking, streaming, batching, and tool-calling operations.

For providers whose API is compatible with the OpenAI chat completions format, LangChain provides BaseChatOpenAI (from langchain_openai) as a higher-level base class that handles most of the OpenAI-protocol boilerplate. Integrations that use this base class only need to configure authentication, API endpoints, and provider-specific behavior.

Required elements for any chat model subclass:

  • _llm_type property: Returns a string identifier for the model type (e.g., "chat-deepseek"). Used for serialization and logging.
  • _generate() method: The core method that takes a list of BaseMessage objects and returns a ChatResult. This is the synchronous generation entry point.
  • Model fields: Declared using Pydantic fields with appropriate types, defaults, and environment variable fallbacks via secret_from_env() and from_env().
  • validate_environment validator: A Pydantic model validator that constructs API clients and validates that required credentials are present (see the sketch after this list).
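
A minimal sketch of such a validator, assuming a hypothetical provider whose credential lives in MYPROVIDER_API_KEY; the client construction is a placeholder rather than a real SDK call, and the class omits the required _llm_type and _generate members shown later:

from pydantic import Field, SecretStr, model_validator
from typing_extensions import Self

from langchain_core.language_models import BaseChatModel
from langchain_core.utils import secret_from_env

class ChatMyProvider(BaseChatModel):
    """Skeleton showing credential handling only (not instantiable as-is)."""

    api_key: SecretStr | None = Field(
        default_factory=secret_from_env("MYPROVIDER_API_KEY", default=None),
    )

    @model_validator(mode="after")
    def validate_environment(self) -> Self:
        # Fail fast if no credential was passed explicitly or found in the environment.
        if self.api_key is None:
            raise ValueError("api_key was not provided and MYPROVIDER_API_KEY is not set.")
        # Construct the provider-specific API client here (placeholder).
        return self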

Optional but common overrides:

  • _stream(): For streaming token-by-token responses (sketched after this list).
  • bind_tools(): For customizing how tools are bound to the model.
  • with_structured_output(): For returning structured (typed) outputs.
  • _create_chat_result(): For post-processing raw API responses.
  • _get_ls_params(): For LangSmith observability integration.
  • lc_secrets: Maps constructor arguments to environment variable names for secret management (also sketched after this list).
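
As a hedged sketch of two of these overrides: _stream() typically yields ChatGenerationChunk objects and reports each token to the callback manager, while lc_secrets maps the api_key field to its environment variable. The self._client.stream(...) call below is a hypothetical placeholder for the provider's streaming API:

from collections.abc import Iterator
from typing import Any

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import AIMessageChunk, BaseMessage
from langchain_core.outputs import ChatGenerationChunk

class ChatMyProvider(BaseChatModel):
    # Required fields, _llm_type, and _generate omitted; see the full example below.

    @property
    def lc_secrets(self) -> dict[str, str]:
        # Maps the api_key constructor field to the environment variable that backs it.
        return {"api_key": "MYPROVIDER_API_KEY"}

    def _stream(
        self,
        messages: list[BaseMessage],
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        # Hypothetical call to the provider's streaming endpoint.
        for token in self._client.stream(messages, stop=stop, **kwargs):
            chunk = ChatGenerationChunk(message=AIMessageChunk(content=token))
            if run_manager:
                # Surface each token to LangChain callbacks as it arrives.
                run_manager.on_llm_new_token(token, chunk=chunk)
            yield chunk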

Usage

Apply this principle when:

  • Building a new LangChain chat model integration for any LLM provider.
  • Wrapping an OpenAI-compatible API endpoint as a LangChain chat model.
  • Extending an existing chat model with provider-specific features (e.g., reasoning content, beta endpoints).

Theoretical Basis

The design follows the Template Method pattern: the base class defines the skeleton of the algorithm (invoke, stream, batch, tool-calling), and subclasses fill in the provider-specific steps (_generate, _stream). This allows the framework to provide consistent behavior (callbacks, retries, caching) while delegating API-specific logic to each integration. A minimal subclass skeleton looks like this:

from typing import Any

from pydantic import SecretStr

from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import BaseChatModel
from langchain_core.messages import BaseMessage
from langchain_core.outputs import ChatResult

class ChatMyProvider(BaseChatModel):
    """Chat model for MyProvider API."""

    model_name: str
    api_key: SecretStr

    @property
    def _llm_type(self) -> str:
        return "chat-myprovider"

    def _generate(
        self,
        messages: list[BaseMessage],
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> ChatResult:
        # Convert messages to provider format
        # Call provider API
        # Convert response to ChatResult
        ...
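
Once _generate is filled in with a real API call, the class behaves like any other LangChain chat model. A brief usage sketch (the model name and key are placeholders):

from langchain_core.messages import HumanMessage, SystemMessage
from pydantic import SecretStr

llm = ChatMyProvider(model_name="my-model-large", api_key=SecretStr("sk-..."))

# invoke() is inherited from BaseChatModel and delegates to _generate().
result = llm.invoke(
    [
        SystemMessage(content="You are a helpful assistant."),
        HumanMessage(content="Explain the Template Method pattern in one sentence."),
    ]
)
print(result.content)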

For OpenAI-compatible providers, the pattern simplifies to:

from pydantic import Field, SecretStr

from langchain_core.utils import from_env, secret_from_env
from langchain_openai.chat_models.base import BaseChatOpenAI

class ChatMyProvider(BaseChatOpenAI):
    """Chat model for OpenAI-compatible provider."""

    model_name: str = Field(alias="model")
    api_key: SecretStr | None = Field(
        default_factory=secret_from_env("MYPROVIDER_API_KEY", default=None),
    )
    api_base: str = Field(
        default_factory=from_env("MYPROVIDER_API_BASE", default="https://api.myprovider.com/v1"),
    )

    @property
    def _llm_type(self) -> str:
        return "chat-myprovider"
