Implementation: Microsoft AutoGen Response and Tool Events
| Knowledge Sources | |
|---|---|
| Domains | Event Processing, Tool Use, Observability, Agent Communication |
| Last Updated | 2026-02-11 00:00 GMT |
Overview
Microsoft AutoGen provides three concrete data structures for capturing and auditing tool execution events: Response wraps the final agent output with its audit trail, ToolCallRequestEvent captures LLM tool call requests, and ToolCallExecutionEvent captures tool execution results.
Description
Response is a dataclass that packages the agent's final chat_message (a BaseChatMessage such as TextMessage, ToolCallSummaryMessage, or HandoffMessage) together with an optional inner_messages sequence. The inner_messages list contains the chronologically ordered audit trail of all events that occurred during the agent's processing, including tool call requests, tool call results, thought events, and any sub-agent events.
ToolCallRequestEvent is a Pydantic model extending BaseAgentEvent. Its content field is a list of FunctionCall objects, each containing:
- name: the name of the function the LLM wants to call
- arguments: the JSON-serialized arguments
- call_id: a unique identifier for correlating with the execution result
The source field identifies which agent emitted the event, and models_usage tracks token consumption for the LLM call that produced the tool call request.
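Because arguments arrives as a JSON-encoded string rather than a parsed dict, consumers typically decode it before inspecting or logging it. A minimal sketch, using a hypothetical stand-in dataclass in place of the real FunctionCall from autogen_core (the field names mirror the description above):

```python
import json
from dataclasses import dataclass


# Hypothetical stand-in mirroring the documented FunctionCall fields;
# the real type lives in autogen_core.
@dataclass
class FakeFunctionCall:
    name: str
    arguments: str  # JSON-serialized, as emitted by the LLM
    call_id: str


call = FakeFunctionCall(
    name="calculate",
    arguments='{"expression": "42 * 17"}',
    call_id="call_001",
)

# Decode the JSON argument payload before use.
args = json.loads(call.arguments)
print(args["expression"])  # -> 42 * 17
```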
ToolCallExecutionEvent is a Pydantic model extending BaseAgentEvent. Its content field is a list of FunctionExecutionResult objects, each containing:
- call_id: matching the corresponding FunctionCall
- content: the serialized return value from the tool
- is_error: whether the execution resulted in an error
Together, these three structures provide a complete, inspectable record of the agent's tool-use behavior.
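The request/result correlation can be illustrated with a small pairing routine. This is a sketch using hypothetical stand-in dataclasses (the real FunctionCall and FunctionExecutionResult come from autogen_core); the point is the join on call_id:

```python
from dataclasses import dataclass


@dataclass
class FakeFunctionCall:  # stand-in for FunctionCall
    name: str
    arguments: str
    call_id: str


@dataclass
class FakeExecutionResult:  # stand-in for FunctionExecutionResult
    call_id: str
    content: str
    is_error: bool


def pair_calls_with_results(calls, results):
    """Join each requested call to its execution result via call_id."""
    by_id = {r.call_id: r for r in results}
    return [(c, by_id.get(c.call_id)) for c in calls]


calls = [FakeFunctionCall("search", '{"query": "python"}', "c1")]
results = [FakeExecutionResult("c1", "Results for: python", False)]

for call, result in pair_calls_with_results(calls, results):
    print(f"{call.name} -> {result.content} (error={result.is_error})")
```

The same pattern applies when walking a Response's inner_messages: collect FunctionCall objects from ToolCallRequestEvent entries and FunctionExecutionResult objects from ToolCallExecutionEvent entries, then join on call_id.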
Usage
Use Response to access both the agent's final message and the full audit trail after calling on_messages(). Iterate over inner_messages to inspect tool calls and results. Use ToolCallRequestEvent and ToolCallExecutionEvent when consuming the async generator from on_messages_stream() to react to tool events in real time (e.g., for progress display in a UI).
Code Reference
Source Location
- Repository: Microsoft AutoGen
- File (Response): python/packages/autogen-agentchat/src/autogen_agentchat/base/_chat_agent.py (lines 12-21)
- File (ToolCallRequestEvent): python/packages/autogen-agentchat/src/autogen_agentchat/messages.py (lines 445-454)
- File (ToolCallExecutionEvent): python/packages/autogen-agentchat/src/autogen_agentchat/messages.py (lines 490-499)
Signature
```python
# Response
@dataclass(kw_only=True)
class Response:
    chat_message: SerializeAsAny[BaseChatMessage]
    inner_messages: Sequence[SerializeAsAny[BaseAgentEvent | BaseChatMessage]] | None = None

# ToolCallRequestEvent
class ToolCallRequestEvent(BaseAgentEvent):
    content: List[FunctionCall]
    type: Literal["ToolCallRequestEvent"] = "ToolCallRequestEvent"

# ToolCallExecutionEvent
class ToolCallExecutionEvent(BaseAgentEvent):
    content: List[FunctionExecutionResult]
    type: Literal["ToolCallExecutionEvent"] = "ToolCallExecutionEvent"
```
Import
```python
from autogen_agentchat.base import Response
from autogen_agentchat.messages import ToolCallRequestEvent, ToolCallExecutionEvent
```
I/O Contract
Inputs (Response)
| Name | Type | Required | Description |
|---|---|---|---|
| chat_message | BaseChatMessage | Yes | The final user-facing message produced by the agent. Typically a TextMessage, ToolCallSummaryMessage, StructuredMessage, or HandoffMessage. |
| inner_messages | Sequence[BaseAgentEvent or BaseChatMessage] or None | No | The chronologically ordered audit trail of events that occurred during processing. Includes ToolCallRequestEvent, ToolCallExecutionEvent, ThoughtEvent, MemoryQueryEvent, and sub-agent events. Defaults to None. |
Inputs (ToolCallRequestEvent)
| Name | Type | Required | Description |
|---|---|---|---|
| content | List[FunctionCall] | Yes | The list of function calls the LLM requested. Each FunctionCall has name (str), arguments (str, JSON), and call_id (str). |
| source | str | Yes | The name of the agent that produced this event (inherited from BaseAgentEvent). |
| models_usage | RequestUsage or None | No | Token usage statistics for the LLM call that produced the tool call request (inherited from BaseAgentEvent). |
Inputs (ToolCallExecutionEvent)
| Name | Type | Required | Description |
|---|---|---|---|
| content | List[FunctionExecutionResult] | Yes | The list of tool execution results. Each FunctionExecutionResult has call_id (str), content (str), and is_error (bool). |
| source | str | Yes | The name of the agent that executed the tools (inherited from BaseAgentEvent). |
Outputs
| Name | Type | Description |
|---|---|---|
| Response instance | Response | Provides .chat_message for the final output and .inner_messages for the audit trail. |
| ToolCallRequestEvent instance | ToolCallRequestEvent | Provides .content (list of FunctionCall) and .to_text() for string representation. |
| ToolCallExecutionEvent instance | ToolCallExecutionEvent | Provides .content (list of FunctionExecutionResult) and .to_text() for string representation. |
Usage Examples
Basic Example: Inspecting Response Inner Messages
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage, ToolCallRequestEvent, ToolCallExecutionEvent
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    # Demo only: eval on untrusted input is unsafe in production.
    return str(eval(expression))


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(
        name="calc_agent",
        model_client=model_client,
        tools=[calculate],
    )
    messages = [TextMessage(content="What is 42 * 17?", source="user")]
    response = await agent.on_messages(messages, CancellationToken())

    # Inspect the final message
    print(f"Final: {response.chat_message.content}")

    # Walk the audit trail
    if response.inner_messages:
        for event in response.inner_messages:
            if isinstance(event, ToolCallRequestEvent):
                for call in event.content:
                    print(f"LLM requested: {call.name}({call.arguments})")
            elif isinstance(event, ToolCallExecutionEvent):
                for result in event.content:
                    print(f"Tool returned: {result.content} (error={result.is_error})")


asyncio.run(main())
```
Streaming Example: Real-Time Event Processing
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.base import Response
from autogen_agentchat.messages import TextMessage, ToolCallRequestEvent, ToolCallExecutionEvent
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def search(query: str) -> str:
    """Search for information."""
    return f"Results for: {query}"


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(
        name="search_agent",
        model_client=model_client,
        tools=[search],
        reflect_on_tool_use=True,
    )
    messages = [TextMessage(content="Search for Python tutorials", source="user")]
    async for event in agent.on_messages_stream(messages, CancellationToken()):
        if isinstance(event, ToolCallRequestEvent):
            print(f"[REQUEST] Calling: {[c.name for c in event.content]}")
        elif isinstance(event, ToolCallExecutionEvent):
            print(f"[RESULT] Got: {[r.content for r in event.content]}")
        elif isinstance(event, Response):
            print(f"[DONE] {event.chat_message.content}")


asyncio.run(main())
```