Implementation: LangChain ChatModelIntegrationTests Subclass
Overview
A concrete standard test suite class, provided as ChatModelIntegrationTests in the langchain-tests package, for validating LangChain chat model implementations against real API endpoints.
Description
ChatModelIntegrationTests is a base test class in langchain_tests.integration_tests that provides 35+ pre-built test methods for validating chat model implementations with real API calls. It extends ChatModelTests (the same base as ChatModelUnitTests) and adds network-dependent tests.
Required overrides:
- chat_model_class: Property returning the chat model class under test.
- chat_model_params: Property returning a dict of initialization parameters (must include real credentials or rely on environment variables).
Optional overrides:
- supports_json_mode: Whether the model supports JSON output mode.
- supports_image_inputs: Whether the model supports image content.
- supports_video_inputs: Whether the model supports video content.
- has_tool_calling: Whether the model supports tool/function calling.
- has_tool_choice: Whether the model supports the tool_choice parameter.
- supports_anthropic_inputs: Whether the model supports Anthropic-style message formats.
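The capability flags above are plain read-only properties that subclasses override to opt in to optional tests. The sketch below illustrates the pattern with a simplified stand-in base class (`MiniChatModelTests` is hypothetical, not the langchain-tests implementation; only the property names mirror the real overrides):

```python
# Illustrative sketch of the capability-flag pattern used by the suite.
# MiniChatModelTests is a simplified stand-in, NOT the real base class.


class MiniChatModelTests:
    """Stand-in base: every capability defaults to a conservative value."""

    @property
    def supports_json_mode(self) -> bool:
        return False

    @property
    def has_tool_calling(self) -> bool:
        return False


class MyProviderTests(MiniChatModelTests):
    """A provider subclass opts in by overriding the relevant properties."""

    @property
    def supports_json_mode(self) -> bool:
        return True


tests = MyProviderTests()
print(tests.supports_json_mode, tests.has_tool_calling)  # → True False
```

Because the flags are properties rather than class attributes, the base suite can consult them lazily and skip (or enable) individual tests per provider.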
Included test methods (partial list):
- test_invoke -- Basic message invocation
- test_stream -- Streaming token generation
- test_ainvoke -- Async invocation
- test_astream -- Async streaming
- test_batch -- Batch processing
- test_tool_calling -- Single tool call
- test_tool_calling_async -- Async tool call
- test_tool_calling_with_no_arguments -- Tool with no params
- test_structured_output -- Structured output (Pydantic)
- test_structured_output_pydantic -- Pydantic class output
- test_structured_output_json_schema -- JSON schema output
- test_structured_output_optional_param -- Optional fields
- test_usage_metadata -- Token usage counting
- test_usage_metadata_streaming -- Token usage in streaming mode
- test_agent_loop -- Multi-turn agent tool-use loop
- test_stop_sequence -- Stop sequence handling
- test_tool_message_histories_string_content -- Tool message with string content
- test_tool_message_histories_list_content -- Tool message with list content
- test_json_mode -- JSON output mode
- test_structured_few_shot_examples -- Few-shot structured output
- test_image_inputs -- Image content in messages
- test_anthropic_inputs -- Anthropic-style message formatting
Usage
Use this class as the base for integration tests in every LangChain partner integration package. Place the test file at tests/integration_tests/test_chat_models.py. API credentials must be available as environment variables.
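Since the suite makes live API calls, it is worth verifying credentials up front rather than letting every test error against the endpoint. A minimal stdlib sketch of such a guard (`missing_credentials` is a hypothetical helper, not part of langchain-tests; the variable name matches the DeepSeek example later in this page):

```python
import os


def missing_credentials(*names: str) -> list[str]:
    """Return the names of required environment variables that are unset."""
    return [name for name in names if not os.environ.get(name)]


# Example: decide whether the integration suite can run at all.
missing = missing_credentials("DEEPSEEK_API_KEY")
if missing:
    print(f"Skipping integration tests; unset: {missing}")
```

A check like this could back a module-level pytest skip so that a missing key produces a clear skip instead of dozens of authentication failures.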
Code Reference
Source Location: libs/standard-tests/langchain_tests/integration_tests/chat_models.py, Lines 173-3455
Class Signature:
```python
class ChatModelIntegrationTests(ChatModelTests):
    """Base class for chat model integration tests.

    Test subclasses must implement the `chat_model_class` and
    `chat_model_params` properties to specify what model to test and its
    initialization parameters.
    """

    @property
    @abstractmethod
    def chat_model_class(self) -> type[BaseChatModel]:
        ...

    @property
    def chat_model_params(self) -> dict[str, Any]:
        return {}
```
Import:
```python
from langchain_tests.integration_tests import ChatModelIntegrationTests
```
I/O Contract
| Input | Type | Description |
|---|---|---|
| chat_model_class | type[BaseChatModel] | The chat model class to test |
| chat_model_params | dict[str, Any] | Initialization parameters including real credentials |
| Environment variables | env vars | API keys (e.g., DEEPSEEK_API_KEY) |
| Network access | TCP sockets | Live API endpoint must be reachable |
| Output | Type | Description |
|---|---|---|
| Test results | pytest outcomes | Pass/fail/skip/xfail for each of the 35+ test methods |
| API responses | AIMessage / AIMessageChunk | Real model responses validated for correctness |
| Usage metadata | dict | Token counts verified against expected structure |
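LangChain's usage metadata dict carries input_tokens, output_tokens, and total_tokens counts. The function below is a minimal stdlib sketch of the kind of structural check test_usage_metadata performs; it is illustrative, not the actual langchain-tests code:

```python
def check_usage_metadata(usage: dict) -> None:
    """Illustrative structural check, similar in spirit to test_usage_metadata.

    Verifies that the standard token-count keys are present, integer-typed,
    and non-negative. Not the real langchain-tests assertion code.
    """
    for key in ("input_tokens", "output_tokens", "total_tokens"):
        assert isinstance(usage.get(key), int), f"missing or non-int: {key}"
        assert usage[key] >= 0, f"negative token count: {key}"


# Example payload shaped like a response's usage_metadata:
check_usage_metadata({"input_tokens": 12, "output_tokens": 7, "total_tokens": 19})
print("usage metadata shape OK")
```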
Usage Examples
DeepSeek integration test subclass:
```python
import pytest
from langchain_core.language_models import BaseChatModel
from langchain_core.tools import BaseTool
from langchain_tests.integration_tests import ChatModelIntegrationTests

from langchain_deepseek.chat_models import ChatDeepSeek

MODEL_NAME = "deepseek-chat"


class TestChatDeepSeek(ChatModelIntegrationTests):
    @property
    def chat_model_class(self) -> type[ChatDeepSeek]:
        return ChatDeepSeek

    @property
    def chat_model_params(self) -> dict:
        return {
            "model": MODEL_NAME,
            "temperature": 0,
        }

    @property
    def supports_json_mode(self) -> bool:
        return True

    @pytest.mark.xfail(reason="Not yet supported.")
    def test_tool_message_histories_list_content(
        self,
        model: BaseChatModel,
        my_adder_tool: BaseTool,
    ) -> None:
        super().test_tool_message_histories_list_content(model, my_adder_tool)
```
Running integration tests:
```shell
cd libs/partners/deepseek

# Set API credentials
export DEEPSEEK_API_KEY="your-api-key"

# Run all integration tests with a per-test timeout
uv run --group test --group test_integration pytest --timeout=30 tests/integration_tests/

# Run a specific test
uv run --group test pytest --timeout=30 tests/integration_tests/test_chat_models.py::TestChatDeepSeek::test_invoke
```
Marking unsupported tests as expected failures:
```python
@pytest.mark.xfail(reason="Provider does not support this feature yet.")
def test_tool_message_histories_list_content(self, model, my_adder_tool):
    super().test_tool_message_histories_list_content(model, my_adder_tool)
```