Principle: LangChain Tool Execution
| Knowledge Sources | |
|---|---|
| Domains | Agentic_AI, Tool_Use |
| Last Updated | 2026-02-11 00:00 GMT |
Overview
The execution of a tool call requested by an LLM, producing a ToolMessage that links the result back to the original call via a unique identifier.
Description
After the model emits tool calls, the application must execute each tool and package the results as ToolMessage objects. Each ToolMessage carries:
- content: The tool's output (string or structured data)
- tool_call_id: The ID linking this result to the specific tool call in the AIMessage
- name: The tool's name (optional but recommended)
This linking mechanism allows the model to match results to their corresponding requests when multiple tools are called in parallel.
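The matching mechanism can be sketched in plain Python. The `ToolMessage` dataclass below is a minimal stand-in for LangChain's class (the real one lives in `langchain_core.messages`), and the tool call ids and results are hypothetical:

```python
from dataclasses import dataclass

# Minimal stand-in for LangChain's ToolMessage (illustrative only).
@dataclass
class ToolMessage:
    content: str
    tool_call_id: str
    name: str = ""

# Two tool calls emitted in parallel by the model (hypothetical ids).
tool_calls = [
    {"name": "get_weather", "args": {"city": "Paris"}, "id": "call_1"},
    {"name": "get_time", "args": {"city": "Paris"}, "id": "call_2"},
]

# Results keyed by the id of the call that produced them.
results = {"call_1": "18°C, clear", "call_2": "14:32 CET"}

# Each result carries its originating call's id, so the model can match
# answers to requests regardless of the order they come back in.
tool_messages = [
    ToolMessage(content=results[tc["id"]], tool_call_id=tc["id"], name=tc["name"])
    for tc in tool_calls
]
```

Because the link is the id rather than list position, the messages could be appended in any order and the model would still pair each result with the right request.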
Usage
Execute tool calls in the application code between receiving the model's tool call response and re-invoking the model with the results. This is the "action" step in the ReAct (Reason + Act) pattern.
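The full cycle (model requests tools, application acts, model is re-invoked with the results) can be sketched with a fake model and one toy tool standing in for real LangChain components; all names here are hypothetical:

```python
# Sketch of the ReAct action step. fake_model simulates an LLM that first
# requests a tool call, then answers once it sees the tool result.
def fake_model(messages):
    if not any(m.get("role") == "tool" for m in messages):
        return {"content": "", "tool_calls": [
            {"name": "add", "args": {"a": 2, "b": 3}, "id": "c1"},
        ]}
    return {"content": "The sum is 5.", "tool_calls": []}

# Toy tool registry: name -> callable taking the call's args dict.
tools = {"add": lambda args: str(args["a"] + args["b"])}

messages = [{"role": "user", "content": "What is 2 + 3?"}]
response = fake_model(messages)
while response["tool_calls"]:                 # the "act" step
    for tc in response["tool_calls"]:
        result = tools[tc["name"]](tc["args"])
        messages.append({"role": "tool", "content": result,
                         "tool_call_id": tc["id"]})
    response = fake_model(messages)           # re-invoke with results
```

The loop exits when the model stops requesting tools, leaving its final answer in `response["content"]`.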
Theoretical Basis
Tool execution follows the ReAct loop pattern:
```python
# Abstract algorithm (simplified; in LangChain, tool calls are dicts
# with "name", "args", and "id" keys)
for tool_call in ai_message.tool_calls:
    tool = tool_registry[tool_call["name"]]
    result = tool.invoke(tool_call["args"])
    tool_messages.append(ToolMessage(
        content=result,
        tool_call_id=tool_call["id"],
    ))
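In practice the loop usually also catches tool failures and returns them as the result content, so the model can see the error and recover instead of the application crashing. A minimal sketch, with an illustrative registry and message shape (not LangChain's actual API):

```python
# Execute one tool call, surfacing any failure back to the model as the
# tool result rather than raising.
def execute_tool_call(tool_registry, tool_call):
    try:
        tool = tool_registry[tool_call["name"]]
        content = str(tool(tool_call["args"]))
    except Exception as exc:  # unknown tool, bad args, runtime error, ...
        content = f"Error: {exc}"
    return {"role": "tool", "content": content,
            "tool_call_id": tool_call["id"]}

registry = {"divide": lambda args: args["a"] / args["b"]}

ok = execute_tool_call(registry,
                       {"name": "divide", "args": {"a": 6, "b": 2}, "id": "c1"})
bad = execute_tool_call(registry,
                        {"name": "divide", "args": {"a": 1, "b": 0}, "id": "c2"})
```

Note that the failed call still gets a message with its `tool_call_id`: every requested call must receive a result, or the model's next invocation sees an unanswered request.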