Implementation:Microsoft Autogen AssistantAgent Init Tools
| Knowledge Sources | |
|---|---|
| Domains | Tool Use, LLM Agents, Function Calling, Agent Configuration |
| Last Updated | 2026-02-11 00:00 GMT |
Overview
A concrete implementation provided by Microsoft AutoGen for creating LLM agents with tool-use capabilities. This page focuses on the tool-related configuration parameters of AssistantAgent.__init__.
Description
The AssistantAgent constructor accepts tool-related parameters that configure how the agent discovers, invokes, and processes tool results. Tools can be provided directly as a list of BaseTool instances or plain callables (which are auto-wrapped into FunctionTool using their docstrings as descriptions). Alternatively, tools can be provided via one or more Workbench instances that manage tool lifecycle and discovery.
The tools and workbench parameters are mutually exclusive. When tools is provided, the agent internally wraps them in a StaticStreamWorkbench. When workbench is provided (a single workbench or a sequence), the agent delegates all tool management to those workbenches.
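The auto-wrapping rule can be illustrated with a small sketch. MiniTool and wrap_callable below are hypothetical stand-ins, not AutoGen's actual FunctionTool; they only mirror the documented behavior that a plain callable's name and docstring become the tool's name and description.

```python
import inspect
from dataclasses import dataclass
from typing import Callable

@dataclass
class MiniTool:
    """Toy stand-in for FunctionTool (illustration only)."""
    name: str
    description: str
    func: Callable

def wrap_callable(fn: Callable) -> MiniTool:
    # Mirror the documented rule: the callable's __name__ becomes the tool
    # name, and its docstring becomes the tool description.
    return MiniTool(name=fn.__name__, description=inspect.getdoc(fn) or "", func=fn)

def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

tool = wrap_callable(search_web)
print(tool.name, "-", tool.description)  # search_web - Search the web for information.
```

AutoGen performs an equivalent wrapping internally, so explicit FunctionTool construction is only needed when you want to override the name or description.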
The constructor validates that the model client supports function calling before accepting tools. It also validates that all tool names are unique and do not conflict with handoff tool names.
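The uniqueness constraint can be sketched as a simple check. This is a hypothetical helper illustrating the documented validation, not the constructor's actual code:

```python
def validate_tool_names(tool_names: list, handoff_names: list) -> None:
    # Hypothetical mirror of the documented validation: tool names must be
    # unique, and none may collide with a handoff tool name.
    if len(set(tool_names)) != len(tool_names):
        raise ValueError("Tool names must be unique.")
    overlap = set(tool_names) & set(handoff_names)
    if overlap:
        raise ValueError(f"Tool names conflict with handoff names: {sorted(overlap)}")

validate_tool_names(["search", "lookup"], ["transfer_to_planner"])  # passes silently
```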
The reflect_on_tool_use parameter controls whether the agent makes an additional LLM call after tool execution to interpret the results. The max_tool_iterations parameter controls how many rounds of tool calls the agent can perform. The tool_call_summary_format and tool_call_summary_formatter parameters control how tool results are formatted when reflection is disabled.
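The precedence between the two summary options can be sketched as follows. Call and ExecResult are simplified stand-ins for FunctionCall and FunctionExecutionResult, and the sketch assumes only the {result} placeholder documented above:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Call:          # simplified stand-in for FunctionCall
    name: str

@dataclass
class ExecResult:    # simplified stand-in for FunctionExecutionResult
    content: str

def summarize(call: Call, result: ExecResult,
              summary_format: str = "{result}",
              formatter: Optional[Callable[[Call, ExecResult], str]] = None) -> str:
    # Documented precedence: a formatter callable, when provided,
    # overrides the format string.
    if formatter is not None:
        return formatter(call, result)
    return summary_format.format(result=result.content)

call, res = Call(name="search_web"), ExecResult(content="42 hits")
print(summarize(call, res))  # 42 hits
print(summarize(call, res, formatter=lambda c, r: f"{c.name}: {r.content}"))  # search_web: 42 hits
```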
Usage
Use these parameters when creating an AssistantAgent that needs tool-use capabilities. Pass tools for simple setups with known tool lists, or pass workbench for advanced scenarios involving MCP servers or custom tool management. Configure reflect_on_tool_use and max_tool_iterations to control the agent's tool-use behavior.
Code Reference
Source Location
- Repository: Microsoft AutoGen
- File:
python/packages/autogen-agentchat/src/autogen_agentchat/agents/_assistant_agent.py (lines 724-859)
Signature
class AssistantAgent:
    def __init__(
        self,
        name: str,
        model_client: ChatCompletionClient,
        *,
        tools: List[BaseTool[Any, Any] | Callable[..., Any] | Callable[..., Awaitable[Any]]] | None = None,
        workbench: Workbench | Sequence[Workbench] | None = None,
        handoffs: List[HandoffBase | str] | None = None,
        model_context: ChatCompletionContext | None = None,
        description: str = "An agent that provides assistance with ability to use tools.",
        system_message: str | None = "You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.",
        model_client_stream: bool = False,
        reflect_on_tool_use: bool | None = None,
        max_tool_iterations: int = 1,
        tool_call_summary_format: str = "{result}",
        tool_call_summary_formatter: Callable[[FunctionCall, FunctionExecutionResult], str] | None = None,
        output_content_type: type[BaseModel] | None = None,
        output_content_type_format: str | None = None,
        memory: Sequence[Memory] | None = None,
        metadata: Dict[str, str] | None = None,
    ):
        ...
Import
from autogen_agentchat.agents import AssistantAgent
I/O Contract
Inputs (Tool-Related Parameters)
| Name | Type | Required | Description |
|---|---|---|---|
| tools | List[BaseTool \| Callable \| AsyncCallable] or None | No | A list of tools for the agent. Each element can be a BaseTool instance (e.g., FunctionTool) or a plain callable. Callables are auto-wrapped into FunctionTool using their docstring as the description. Cannot be used together with workbench. |
| workbench | Workbench or Sequence[Workbench] or None | No | One or more workbench instances that manage tool lifecycle and discovery. Cannot be used together with tools. A single workbench is wrapped in a list internally. |
| reflect_on_tool_use | bool or None | No | Whether to make an additional LLM call after tool execution to interpret results. Defaults to None, which resolves to True if output_content_type is set, otherwise False. |
| max_tool_iterations | int | No | Maximum number of tool call loop iterations. Must be >= 1. Defaults to 1. Higher values allow the agent to call tools multiple times before producing a final response. |
| tool_call_summary_format | str | No | Format string for summarizing tool results when reflection is disabled. Uses {result} as a placeholder. Defaults to "{result}". |
| tool_call_summary_formatter | Callable or None | No | A custom function for formatting tool call summaries. Takes a FunctionCall and a FunctionExecutionResult and returns a string. Overrides tool_call_summary_format when provided. |
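The budget semantics of max_tool_iterations can be sketched as a minimal loop. This illustrates the documented behavior only, not AutoGen's internal implementation; model_step and execute are hypothetical stubs:

```python
def run_tool_loop(model_step, execute, max_tool_iterations=1):
    """Sketch of the loop implied by max_tool_iterations (assumption: not
    AutoGen's actual code). model_step(history) returns either a tool-call
    dict or a final string; execute(call) runs the tool."""
    history = []
    for _ in range(max_tool_iterations):
        reply = model_step(history)
        if isinstance(reply, str):        # model produced a final answer
            return reply
        history.append(execute(reply))    # run the tool, feed result back
    return f"[tool results] {history[-1]}"  # budget exhausted: summarize

# Stub model: calls a tool twice, then answers.
calls = iter([{"tool": "lookup"}, {"tool": "verify"}, "done"])
answer = run_tool_loop(lambda h: next(calls), lambda c: f"ran {c['tool']}",
                       max_tool_iterations=3)
print(answer)  # done
```

With max_tool_iterations=1 (the default), the same stub would exhaust its budget after the first tool call and fall back to summarizing the tool result.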
Outputs
| Name | Type | Description |
|---|---|---|
| instance | AssistantAgent | A configured agent instance with tool-use capabilities. The agent can be used in teams or run directly via run() or run_stream(). |
Usage Examples
Basic Example with Tools List
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

model_client = OpenAIChatCompletionClient(model="gpt-4o")
agent = AssistantAgent(
    name="search_agent",
    model_client=model_client,
    tools=[search_web],  # Auto-wrapped using docstring as description
)
Multi-Iteration Tool Use with Reflection
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def lookup_database(query: str) -> str:
    """Query the internal database."""
    return f"DB result: {query}"

async def verify_fact(claim: str) -> str:
    """Verify a factual claim."""
    return f"Verified: {claim} is correct"

model_client = OpenAIChatCompletionClient(model="gpt-4o")
agent = AssistantAgent(
    name="research_agent",
    model_client=model_client,
    tools=[lookup_database, verify_fact],
    reflect_on_tool_use=True,
    max_tool_iterations=3,
)
Using a Workbench
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import McpWorkbench, StdioServerParams

server_params = StdioServerParams(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
)
workbench = McpWorkbench(server_params=server_params)
model_client = OpenAIChatCompletionClient(model="gpt-4o")
agent = AssistantAgent(
    name="file_agent",
    model_client=model_client,
    workbench=workbench,
)
Related Pages
Implements Principle
- Principle:Microsoft_Autogen_Tool_Augmented_Agent
- Environment:Microsoft_Autogen_Python_Runtime_Environment
- Environment:Microsoft_Autogen_LLM_Provider_API_Keys
- Environment:Microsoft_Autogen_Extension_Optional_Dependencies
- Heuristic:Microsoft_Autogen_Parallel_Tool_Call_Safety
- Heuristic:Microsoft_Autogen_Name_Uniqueness_Constraints