Principle: Microsoft AutoGen Tool Event Processing
| Knowledge Sources | |
|---|---|
| Domains | Event Processing, Tool Use, Observability, Agent Communication |
| Last Updated | 2026-02-11 00:00 GMT |
Overview
Tool event processing is the practice of capturing, structuring, and propagating discrete events that occur during tool execution within an agent, enabling observability, auditing, and downstream consumption of the full tool-calling audit trail.
Description
When a tool-augmented agent executes tools during a conversation, several discrete events occur: the LLM requests a tool call, the tool is executed, and the result is produced. Tool event processing is the discipline of capturing each of these events as structured, typed objects and making them available to callers.
The key event types in a tool execution flow are:
- Tool call request events: Emitted when the LLM produces one or more function call requests. Each event contains the list of function calls the LLM wants to invoke, including the function name and the serialized arguments. This event captures the LLM's intent to use a tool.
- Tool call execution events: Emitted after the tools have been executed. Each event contains the list of function execution results, pairing each original call with its return value. This event captures the outcome of tool use.
- Response wrapping: The agent's final response wraps the user-facing chat message together with a sequence of inner messages that represent the full audit trail of events that occurred during processing. This includes tool call request events, tool call execution events, thought events, and any other intermediate messages.
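The three event shapes above can be sketched as simplified dataclasses. These are illustrative stand-ins, not AutoGen's actual message classes; field names follow the description above, and `chat_message` is reduced to a plain string for brevity:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FunctionCall:
    call_id: str    # Unique identifier for this call
    name: str       # Name of the tool to invoke
    arguments: str  # JSON-serialized arguments

@dataclass
class FunctionExecutionResult:
    call_id: str          # Matches the originating FunctionCall
    content: str          # Serialized result from the tool
    is_error: bool = False

@dataclass
class ToolCallRequestEvent:
    source: str                  # Agent that produced the request
    content: List[FunctionCall]  # The LLM's intent to use tools

@dataclass
class ToolCallExecutionEvent:
    source: str                             # Agent that executed the tools
    content: List[FunctionExecutionResult]  # The outcome of tool use

@dataclass
class Response:
    chat_message: str                                  # Final user-facing message (simplified)
    inner_messages: List[object] = field(default_factory=list)  # Full audit trail
```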
This event structure serves several purposes:
- Observability: Developers and monitoring systems can inspect the inner messages to understand exactly what the agent did, what tools it called, what arguments it passed, and what results it received.
- Debugging: When an agent produces an unexpected response, the inner message trail provides a step-by-step record of the reasoning and tool-use process.
- Streaming: Events are yielded incrementally as they occur, enabling real-time UIs to display tool call progress, results, and the agent's evolving reasoning.
- Composition: When agents are nested (an agent calling an agent tool), events from the inner agent can be propagated to the outer agent's event stream, providing end-to-end visibility.
- Logging and auditing: The structured events can be serialized and stored for compliance, cost tracking, and post-hoc analysis.
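As a sketch of the logging and auditing use case, the audit trail can be serialized into one JSON record per event. The event shapes here are illustrative dicts, not AutoGen's real message classes:

```python
import json

def audit_log(inner_messages):
    """Serialize a response's inner messages into JSON log records."""
    records = []
    for event in inner_messages:
        records.append(json.dumps({
            "type": event["type"],        # e.g. ToolCallRequestEvent
            "source": event["source"],    # agent that emitted the event
            "content": event["content"],  # calls or results
        }))
    return records

# Illustrative audit trail for one tool iteration.
trail = [
    {"type": "ToolCallRequestEvent", "source": "assistant",
     "content": [{"call_id": "c1", "name": "get_weather", "arguments": '{"city": "Paris"}'}]},
    {"type": "ToolCallExecutionEvent", "source": "assistant",
     "content": [{"call_id": "c1", "content": "Sunny, 21C", "is_error": False}]},
]
for line in audit_log(trail):
    print(line)
```

Because each record is plain JSON, the same trail can feed a log aggregator, a cost tracker, or a compliance store without further transformation.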
Usage
Tool event processing is relevant when:
- You need to display tool execution progress to a user in real time (streaming events to a UI).
- You are debugging an agent's behavior and need to inspect the sequence of tool calls and results.
- You are building logging or auditing systems that record every action an agent takes.
- You are composing agents hierarchically and need to propagate sub-agent events to the parent.
- You are implementing custom response handling that distinguishes between text-only responses and tool-augmented responses.
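The last point, distinguishing text-only from tool-augmented responses, reduces to scanning the inner messages for tool events. A minimal sketch, with events represented as illustrative dicts:

```python
def used_tools(inner_messages) -> bool:
    """True if the response involved at least one tool call."""
    return any(
        m["type"] in ("ToolCallRequestEvent", "ToolCallExecutionEvent")
        for m in inner_messages
    )

# A trail containing a tool call request is tool-augmented.
trail = [{"type": "ThoughtEvent"}, {"type": "ToolCallRequestEvent"}]
print(used_tools(trail))  # True
```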
Theoretical Basis
Tool event processing follows the Event Sourcing pattern, where state changes (tool calls and results) are captured as an immutable sequence of events rather than just the final state.
MODEL tool_event_processing:

    EVENT ToolCallRequestEvent:
        source: str                    # Name of the agent that made the request
        content: List[FunctionCall]
            - name: str                # Name of the tool to call
            - arguments: str           # JSON-serialized arguments
            - call_id: str             # Unique identifier for this call
        models_usage: Usage            # Token usage for the LLM call that produced this

    EVENT ToolCallExecutionEvent:
        source: str                    # Name of the agent that executed the tools
        content: List[FunctionExecutionResult]
            - call_id: str             # Matches the call_id from the request
            - content: str             # Serialized result from the tool
            - is_error: bool           # Whether the execution resulted in an error

    STRUCTURE Response:
        chat_message: BaseChatMessage  # The final user-facing message
        inner_messages: List[Event]    # Full audit trail, including:
            - ToolCallRequestEvent(s)
            - ToolCallExecutionEvent(s)
            - ThoughtEvent(s)
            - Other intermediate messages

    FLOW for a single tool iteration:
        1. LLM produces tool calls
        2. EMIT ToolCallRequestEvent(calls)
        3. Execute each tool call
        4. EMIT ToolCallExecutionEvent(results)
        5. Append both events to inner_messages
        6. Continue to next iteration or produce final response

    FLOW for response assembly:
        1. Collect all inner_messages from all iterations
        2. Produce the final chat_message (text, summary, or structured)
        3. Wrap in Response(chat_message, inner_messages)
        4. Yield the Response to the caller
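The two flows above can be made executable in a few lines. Everything here is a simplified stand-in: events are tuples rather than typed messages, the tool registry is a plain dict, and the final-answer logic is reduced to string formatting:

```python
import json
from dataclasses import dataclass

@dataclass
class ToolCallRequestEvent:
    source: str
    content: list  # list of (call_id, name, arguments) tuples

@dataclass
class ToolCallExecutionEvent:
    source: str
    content: list  # list of (call_id, result, is_error) tuples

@dataclass
class Response:
    chat_message: str
    inner_messages: list

def run_tool_iteration(source, calls, tools, inner_messages):
    """One iteration: emit request event, execute each call, emit execution event."""
    inner_messages.append(ToolCallRequestEvent(source, calls))  # step 2: EMIT request
    results = []
    for call_id, name, arguments in calls:                      # step 3: execute
        try:
            out = tools[name](**json.loads(arguments))
            results.append((call_id, str(out), False))
        except Exception as exc:
            results.append((call_id, str(exc), True))           # failures become is_error
    inner_messages.append(ToolCallExecutionEvent(source, results))  # step 4: EMIT results
    return results

# Response assembly: collect the trail, produce the final message, wrap both.
tools = {"add": lambda a, b: a + b}
inner: list = []
results = run_tool_iteration("assistant", [("c1", "add", '{"a": 2, "b": 3}')], tools, inner)
response = Response(chat_message=f"The sum is {results[0][1]}", inner_messages=inner)
print(response.chat_message)         # The sum is 5
print(len(response.inner_messages))  # 2
```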
The call_id field provides correlation between request and execution events. Each function call in a request has a unique call_id, and the corresponding execution result carries the same call_id. This allows consumers to match requests with their results, even when multiple tool calls are executed in parallel.
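Correlating results back to their requests is then a dictionary lookup on call_id, which works even when results arrive out of order. The data below is illustrative:

```python
requests = [
    {"call_id": "c1", "name": "get_weather", "arguments": '{"city": "Paris"}'},
    {"call_id": "c2", "name": "get_time", "arguments": '{"tz": "UTC"}'},
]
results = [  # may arrive in any order when tools run in parallel
    {"call_id": "c2", "content": "12:00", "is_error": False},
    {"call_id": "c1", "content": "Sunny", "is_error": False},
]

# Index results by call_id, then pair each request with its result.
by_id = {r["call_id"]: r for r in results}
pairs = [(req, by_id[req["call_id"]]) for req in requests]
for req, res in pairs:
    print(req["name"], "->", res["content"])
# get_weather -> Sunny
# get_time -> 12:00
```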
The inner_messages list in the Response is ordered chronologically, providing a complete timeline of the agent's processing. This is distinct from the model context (which the LLM sees) and serves purely as an observability mechanism for the caller.