Implementation:Langchain_ai_Langgraph_Create_React_Agent
| Attribute | Value |
|---|---|
| API | create_react_agent |
| Workflow | ReAct_Agent_Creation |
| Type | API Doc |
| Repository | Langchain_ai_Langgraph |
| Source File | libs/prebuilt/langgraph/prebuilt/chat_agent_executor.py |
| Source Lines | L278-1002 |
Overview
create_react_agent is the primary factory function for building ReAct-style agents in LangGraph. It constructs a compiled StateGraph that implements an iterative tool-calling loop: the "agent" node calls the language model, conditional edges route to the "tools" node when tool calls are present, and the loop continues until the model produces a final response. The function handles model initialization (including string-to-model conversion via init_chat_model), automatic tool binding, prompt composition, structured output, pre/post model hooks, and graph compilation.
Note: This function is deprecated in favor of create_agent from the langchain package. However, it remains widely used and documented here for reference.
Description
The create_react_agent function performs the following construction steps:
- State schema resolution: Defaults to `AgentState` (or `AgentStateWithStructuredResponse` if `response_format` is provided). Custom schemas must include `messages` and `remaining_steps` keys.
- Tool node setup: Accepts either a `ToolNode` instance or a sequence of tools (which are wrapped in a new `ToolNode`). Dictionary entries in the tools list are treated as LLM builtin tools.
- Model configuration: String models are initialized via `init_chat_model`. Tools are bound via `bind_tools` if not already bound. Dynamic model callables are resolved at runtime.
- Graph construction: A `StateGraph` is built with "agent" and "tools" nodes, optional "pre_model_hook" and "post_model_hook" nodes, and an optional "generate_structured_response" node.
- Edge wiring: Conditional edges implement the routing logic based on whether the model's response contains tool calls.
- Compilation: The graph is compiled with optional checkpointer, store, interrupt points, and debug configuration.

The function supports two versioning modes:
- v1: The tool node processes all tool calls from a single message together.
- v2 (default): Individual tool calls are dispatched via `Send`, enabling per-call parallelism and isolation.
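The loop these steps compile down to can be sketched in plain Python, with no LangGraph imports. This is an illustrative sketch only, not the actual implementation; all function names and message shapes here are assumptions made for the example.

```python
# Minimal, dependency-free sketch of the loop create_react_agent compiles:
# call the model ("agent" node), route to tools while tool calls are present,
# stop when the model answers directly. All names here are illustrative.
def run_react_loop(model, tools, messages, max_steps=25):
    for _ in range(max_steps):  # mirrors the remaining_steps guard
        ai_message = model(messages)  # "agent" node
        messages = messages + [ai_message]
        tool_calls = ai_message.get("tool_calls", [])
        if not tool_calls:  # conditional edge: no tool calls -> END
            return messages
        for call in tool_calls:  # "tools" node (v1-style: batch per message)
            result = tools[call["name"]](**call["args"])
            messages = messages + [{"role": "tool", "content": str(result)}]
    return messages


# Fake model: requests the tool once, then produces a final answer.
def fake_model(messages):
    if not any(m.get("role") == "tool" for m in messages):
        return {"role": "ai", "tool_calls": [{"name": "add", "args": {"a": 2, "b": 3}}]}
    return {"role": "ai", "content": "The answer is 5.", "tool_calls": []}


final = run_react_loop(
    fake_model, {"add": lambda a, b: a + b},
    [{"role": "user", "content": "what is 2 + 3?"}],
)
```

In v2 mode the inner `for call in tool_calls` loop would instead dispatch each call as its own `Send`, so tool executions can run in parallel rather than sequentially.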
Usage
```python
from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool


@tool
def check_weather(location: str) -> str:
    """Return the weather forecast for the specified location."""
    return f"It's always sunny in {location}"


graph = create_react_agent(
    "anthropic:claude-3-7-sonnet-latest",
    tools=[check_weather],
    prompt="You are a helpful assistant",
)

inputs = {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)
```
Code Reference
Source Location
| Attribute | Value |
|---|---|
| File | libs/prebuilt/langgraph/prebuilt/chat_agent_executor.py |
| Function | create_react_agent, lines 278-1002 |
| Internal helpers | call_model (L661-694), acall_model (L696-721), should_continue (L831-859), route_tool_responses (L970-983) |
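The routing decision made by `should_continue` can be approximated in a few lines. This is a hedged sketch, not the actual helper; the real implementation at L831-859 also accounts for structured output and post-model hooks.

```python
# Approximation of the should_continue routing decision. END is a sentinel
# stand-in here; LangGraph exposes the real one as langgraph.graph.END.
END = "__end__"


def should_continue(state):
    last = state["messages"][-1]
    if not getattr(last, "tool_calls", None):
        return END  # no tool calls: finish (or generate_structured_response)
    return "tools"  # otherwise route to the tools node


# Minimal stand-in for an AIMessage, carrying only the tool_calls attribute.
class AIMsg:
    def __init__(self, tool_calls=None):
        self.tool_calls = tool_calls or []
```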
Signature
```python
def create_react_agent(
    model: str
    | LanguageModelLike
    | Callable[[StateSchema, Runtime[ContextT]], BaseChatModel]
    | Callable[[StateSchema, Runtime[ContextT]], Awaitable[BaseChatModel]]
    | Callable[
        [StateSchema, Runtime[ContextT]], Runnable[LanguageModelInput, BaseMessage]
    ]
    | Callable[
        [StateSchema, Runtime[ContextT]],
        Awaitable[Runnable[LanguageModelInput, BaseMessage]],
    ],
    tools: Sequence[BaseTool | Callable | dict[str, Any]] | ToolNode,
    *,
    prompt: Prompt | None = None,
    response_format: StructuredResponseSchema
    | tuple[str, StructuredResponseSchema]
    | None = None,
    pre_model_hook: RunnableLike | None = None,
    post_model_hook: RunnableLike | None = None,
    state_schema: StateSchemaType | None = None,
    context_schema: type[Any] | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    debug: bool = False,
    version: Literal["v1", "v2"] = "v2",
    name: str | None = None,
) -> CompiledStateGraph
```
Import
```python
from langgraph.prebuilt import create_react_agent
```
I/O Contract
Input Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | str \| LanguageModelLike \| Callable | (required) | The language model. A string like "openai:gpt-4" triggers init_chat_model. A callable enables dynamic model selection with signature (state, runtime) -> BaseChatModel. |
| tools | Sequence[BaseTool \| Callable \| dict] \| ToolNode | (required) | Tools for the agent. An empty list creates an agent with no tool-calling capability. Dict entries are treated as LLM builtin tools. |
| prompt | str \| SystemMessage \| Callable \| Runnable \| None | None | Optional prompt. Strings become a SystemMessage; callables/runnables receive the full state and return model input. |
| response_format | StructuredResponseSchema \| tuple[str, StructuredResponseSchema] \| None | None | Schema for structured final output. Adds a separate LLM call after the agent loop; the result is stored in the structured_response state key. |
| pre_model_hook | RunnableLike \| None | None | Node executed before every LLM call. Must return a messages or llm_input_messages key. |
| post_model_hook | RunnableLike \| None | None | Node executed after every LLM call. Only available with version="v2". |
| state_schema | StateSchemaType \| None | None | Custom state schema. Must include messages and remaining_steps. Defaults to AgentState. |
| context_schema | type[Any] \| None | None | Schema for runtime context accessible via Runtime.context. |
| checkpointer | Checkpointer \| None | None | Checkpoint saver for state persistence across invocations. |
| store | BaseStore \| None | None | Persistent store for cross-thread data (e.g., user preferences). |
| interrupt_before | list[str] \| None | None | Node names to interrupt before. Typically "agent" or "tools". |
| interrupt_after | list[str] \| None | None | Node names to interrupt after. |
| debug | bool | False | Enable debug mode for detailed execution logging. |
| version | Literal["v1", "v2"] | "v2" | "v1": a single tool node processes all calls together. "v2": individual calls are dispatched via Send. |
| name | str \| None | None | Name for the compiled graph. Used as the subgraph node name in multi-agent systems. |
Output
Returns: `CompiledStateGraph`, a compiled LangChain `Runnable` exposing `invoke`, `stream`, `ainvoke`, and `astream` methods.
The compiled graph expects input in the form `{"messages": [...]}` and produces output with at least a `"messages"` key. If `response_format` is specified, the output also includes `"structured_response"`.
Usage Examples
Minimal Agent
```python
from langgraph.prebuilt import create_react_agent
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b


agent = create_react_agent("openai:gpt-4", tools=[multiply])
result = agent.invoke({"messages": [("user", "What is 6 times 7?")]})
print(result["messages"][-1].content)
```
Agent with Structured Output
```python
from pydantic import BaseModel

from langgraph.prebuilt import create_react_agent


class MathResult(BaseModel):
    answer: int
    explanation: str


agent = create_react_agent(
    "openai:gpt-4",
    tools=[multiply],
    response_format=MathResult,
)
result = agent.invoke({"messages": [("user", "What is 6 times 7?")]})
print(result["structured_response"])
# MathResult(answer=42, explanation="6 multiplied by 7 equals 42")
```
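Conceptually, `response_format` appends a `generate_structured_response` node that makes one extra model call after the loop ends, coercing the final answer into the schema. A dependency-free sketch of that idea; the function below is illustrative, not the actual node implementation.

```python
from dataclasses import dataclass


@dataclass
class MathResult:
    answer: int
    explanation: str


# Illustrative stand-in for the generate_structured_response node. A real
# implementation would call model.with_structured_output(MathResult) with
# the full message history instead of parsing a string.
def generate_structured_response(final_text: str) -> MathResult:
    return MathResult(answer=42, explanation=final_text)


resp = generate_structured_response("6 multiplied by 7 equals 42")
```

The key point is the cost model: structured output adds one additional LLM round trip per invocation, on top of the agent loop itself.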
Agent with Pre-Model Hook for Message Trimming
```python
from langchain_core.messages import RemoveMessage
from langgraph.graph.message import REMOVE_ALL_MESSAGES


def trim_messages(state):
    """Keep only the last 10 messages."""
    messages = state["messages"]
    if len(messages) > 10:
        return {
            "messages": [RemoveMessage(id=REMOVE_ALL_MESSAGES)] + messages[-10:]
        }
    return {"messages": messages}


agent = create_react_agent(
    "openai:gpt-4",
    tools=[multiply],
    pre_model_hook=trim_messages,
)
```
Agent with Human-in-the-Loop
```python
from langgraph.checkpoint.memory import MemorySaver

agent = create_react_agent(
    "openai:gpt-4",
    tools=[multiply],
    checkpointer=MemorySaver(),
    interrupt_before=["tools"],  # Pause before tool execution
)

# First invocation pauses before tools
result = agent.invoke(
    {"messages": [("user", "Multiply 6 by 7")]},
    config={"configurable": {"thread_id": "1"}},
)

# Review the tool call, then resume
result = agent.invoke(None, config={"configurable": {"thread_id": "1"}})
```
Dynamic Model Selection
```python
from dataclasses import dataclass

from langchain_openai import ChatOpenAI
from langgraph.runtime import Runtime


@dataclass
class ModelContext:
    model_name: str = "gpt-3.5-turbo"


gpt4 = ChatOpenAI(model="gpt-4")
gpt35 = ChatOpenAI(model="gpt-3.5-turbo")


def select_model(state, runtime: Runtime[ModelContext]):
    model = gpt4 if runtime.context.model_name == "gpt-4" else gpt35
    return model.bind_tools([multiply])


agent = create_react_agent(
    select_model,
    tools=[multiply],
    context_schema=ModelContext,
)
```
Related Pages
- Langchain_ai_Langgraph_ReAct_Agent_Construction
- Environment:Langchain_ai_Langgraph_Python_Runtime_Environment
- Heuristic:Langchain_ai_Langgraph_Retry_Policy_Configuration
- Langchain_ai_Langgraph_ToolNode_Init
- Langchain_ai_Langgraph_Init_Chat_Model
- Langchain_ai_Langgraph_Pregel_Invoke_For_Agents
- Langchain_ai_Langgraph_AgentState_Schema