Implementation:Microsoft Autogen AssistantAgent Init
| Knowledge Sources | |
|---|---|
| Domains | AI Agents, Multi-Agent Systems, LLM Integration, Tool Use |
| Last Updated | 2026-02-11 00:00 GMT |
Overview
Concrete tool provided by Microsoft AutoGen for creating LLM-powered agents with tool-use, handoff, and structured-output capabilities.
Description
AssistantAgent is the primary agent class in AutoGen's AgentChat layer. It wraps a model client and provides a complete agent lifecycle: receiving messages, querying the LLM, executing tool calls, reflecting on tool results, producing structured output, and handing off to other agents. The __init__ method validates all configuration, converts raw callables into framework tool objects, ensures name uniqueness across tools and handoffs, and sets up the conversation context.
Key behaviors configured at init time include:
- Tool wrapping: Plain Python functions (sync or async) are automatically wrapped in `FunctionTool` objects using their docstrings as descriptions.
- Handoff registration: String handoff targets are converted to `HandoffBase` objects, and handoff tools are created for the LLM to call.
- Capability validation: If tools or handoffs are provided but the model does not support function calling, a `ValueError` is raised immediately.
- Structured output: When `output_content_type` is set, a `StructuredMessageFactory` is created and `reflect_on_tool_use` is enabled by default.
- Workbench support: Tools can alternatively be provided through a `Workbench` interface (mutually exclusive with the `tools` parameter).
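The tool-wrapping and name-uniqueness behaviors above can be sketched in standalone Python. This is a simplified illustration of the described logic, not AutoGen's actual implementation; `FunctionToolSketch` and `wrap_tools` are hypothetical names.

```python
import inspect

class FunctionToolSketch:
    """Hypothetical stand-in for AutoGen's FunctionTool wrapper."""

    def __init__(self, func):
        self.name = func.__name__
        # The docstring becomes the tool description, as described above.
        self.description = inspect.getdoc(func) or ""
        self.func = func

def wrap_tools(tools):
    """Wrap plain callables and enforce unique tool names (sketch)."""
    wrapped = []
    seen = set()
    for t in tools:
        tool = t if isinstance(t, FunctionToolSketch) else FunctionToolSketch(t)
        if tool.name in seen:
            # Mirrors the immediate validation failure described above.
            raise ValueError(f"Duplicate tool name: {tool.name}")
        seen.add(tool.name)
        wrapped.append(tool)
    return wrapped
```

In the real class, the same pass also checks handoff tool names against tool names, which is why the name-uniqueness constraint spans both collections.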
Usage
Import AssistantAgent from autogen_agentchat.agents and instantiate it after creating a model client. Pass agent instances to team constructors (e.g., RoundRobinGroupChat, SelectorGroupChat) as participants.
Code Reference
Source Location
- Repository: Microsoft AutoGen
- File:
`python/packages/autogen-agentchat/src/autogen_agentchat/agents/_assistant_agent.py` (lines 724-859)
Signature
```python
class AssistantAgent:
    def __init__(
        self,
        name: str,
        model_client: ChatCompletionClient,
        *,
        tools: List[BaseTool[Any, Any] | Callable[..., Any] | Callable[..., Awaitable[Any]]] | None = None,
        workbench: Workbench | Sequence[Workbench] | None = None,
        handoffs: List[HandoffBase | str] | None = None,
        model_context: ChatCompletionContext | None = None,
        description: str = "An agent that provides assistance with ability to use tools.",
        system_message: str | None = "You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.",
        model_client_stream: bool = False,
        reflect_on_tool_use: bool | None = None,
        max_tool_iterations: int = 1,
        tool_call_summary_format: str = "{result}",
        tool_call_summary_formatter: Callable[[FunctionCall, FunctionExecutionResult], str] | None = None,
        output_content_type: type[BaseModel] | None = None,
        output_content_type_format: str | None = None,
        memory: Sequence[Memory] | None = None,
        metadata: Dict[str, str] | None = None,
    ):
```
Import
```python
from autogen_agentchat.agents import AssistantAgent
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| name | str | Yes | Unique identifier for the agent within a team. Used for message routing and handoff targeting. |
| model_client | ChatCompletionClient | Yes | Configured LLM client instance (e.g., OpenAIChatCompletionClient). Must support function_calling if tools or handoffs are provided. |
| tools | List[BaseTool or Callable] or None | No | Tools the agent can invoke. Plain functions are auto-wrapped in FunctionTool using their docstring as description. Cannot be used together with workbench. |
| workbench | Workbench or Sequence[Workbench] or None | No | Alternative tool provider interface. Mutually exclusive with the tools parameter. |
| handoffs | List[HandoffBase or str] or None | No | Agents this agent can hand off to. Strings are converted to HandoffBase with the string as the target name. |
| model_context | ChatCompletionContext or None | No | Manages conversation history sent to the LLM. Defaults to UnboundedChatCompletionContext (no truncation). |
| description | str | No | Human-readable description of the agent's role. Used by selector-based orchestrators to choose the next speaker. Defaults to "An agent that provides assistance with ability to use tools." |
| system_message | str or None | No | System prompt establishing the agent's persona and instructions. Set to None for no system message. Defaults to a standard helpful assistant prompt. |
| model_client_stream | bool | No | Whether to use streaming for LLM responses. Defaults to False. |
| reflect_on_tool_use | bool or None | No | Whether to make a follow-up LLM call after tool execution to synthesize results. Auto-enabled when output_content_type is set. Defaults to False otherwise. |
| max_tool_iterations | int | No | Maximum number of tool call rounds per turn. Must be >= 1. Defaults to 1. |
| tool_call_summary_format | str | No | Format string for summarizing tool call results. Uses {result} placeholder. Defaults to "{result}". |
| tool_call_summary_formatter | Callable or None | No | Custom function for formatting tool call summaries. Takes (FunctionCall, FunctionExecutionResult) and returns str. Overrides tool_call_summary_format if provided. |
| output_content_type | type[BaseModel] or None | No | Pydantic model class for structured output. When set, the agent's final response must conform to this schema. |
| output_content_type_format | str or None | No | Format string for presenting structured output. |
| memory | Sequence[Memory] or None | No | Memory modules for cross-conversation recall. |
| metadata | Dict[str, str] or None | No | Arbitrary key-value metadata attached to the agent. |
Outputs
| Name | Type | Description |
|---|---|---|
| instance | AssistantAgent | A configured agent instance ready to participate in teams. Produces TextMessage, ToolCallSummaryMessage, HandoffMessage, and optionally StructuredMessage types. |
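The precedence between `tool_call_summary_format` and `tool_call_summary_formatter` described in the Inputs table can be illustrated with plain string formatting. This is a standalone sketch, not AutoGen code; the `{tool_name}` and `{arguments}` placeholders are assumptions beyond the documented `{result}`.

```python
def summarize(tool_name, arguments, result,
              summary_format="{result}", formatter=None):
    """Sketch of per-tool-call summary formatting (hypothetical helper)."""
    # A custom formatter, when provided, overrides the format string.
    if formatter is not None:
        return formatter(tool_name, arguments, result)
    return summary_format.format(
        tool_name=tool_name, arguments=arguments, result=result
    )
```

With the default `"{result}"` format, the summary is simply the raw tool result; a custom format or formatter lets the agent surface the tool name or arguments alongside it.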
Usage Examples
Basic Example
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o")

agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    system_message="You are a helpful writing assistant.",
    description="A general-purpose writing assistant.",
)

async def main() -> None:
    result = await agent.run(task="Write a one-sentence tagline for a note-taking app.")
    print(result.messages[-1].content)

asyncio.run(main())
```
Agent with Tools
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny, 72F."

model_client = OpenAIChatCompletionClient(model="gpt-4o")

agent = AssistantAgent(
    name="weather_agent",
    model_client=model_client,
    tools=[get_weather],
    system_message="You help users check the weather. Use your tools to answer questions.",
    reflect_on_tool_use=True,
)

async def main() -> None:
    result = await agent.run(task="What is the weather in Paris?")
    print(result.messages[-1].content)

asyncio.run(main())
```
Agent with Handoffs
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o")

triage_agent = AssistantAgent(
    name="triage",
    model_client=model_client,
    handoffs=["billing_agent", "technical_agent"],
    system_message=(
        "You triage customer requests. Hand off to billing_agent for billing "
        "issues or technical_agent for technical issues."
    ),
)
```
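The string-to-handoff conversion used above can be sketched with a small dataclass. The `transfer_to_<target>` tool-name convention matches AutoGen's default handoff naming, but `HandoffSketch` and `normalize_handoffs` are illustrative names, not the library's API.

```python
from dataclasses import dataclass

@dataclass
class HandoffSketch:
    """Hypothetical stand-in for AutoGen's HandoffBase."""
    target: str
    name: str = ""
    message: str = ""

    def __post_init__(self):
        # Default tool name follows the transfer_to_<target> convention.
        if not self.name:
            self.name = f"transfer_to_{self.target}"
        if not self.message:
            self.message = f"Transferred to {self.target}."

def normalize_handoffs(handoffs):
    """Convert string targets into handoff objects, as __init__ does (sketch)."""
    return [h if isinstance(h, HandoffSketch) else HandoffSketch(target=h)
            for h in handoffs]
```

Each resulting handoff object yields a callable tool exposed to the LLM; when the model invokes `transfer_to_billing_agent`, the framework emits a `HandoffMessage` targeting that agent.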
Related Pages
Implements Principle
- Principle:Microsoft_Autogen_Agent_Instantiation
- Environment:Microsoft_Autogen_Python_Runtime_Environment
- Environment:Microsoft_Autogen_LLM_Provider_API_Keys
- Heuristic:Microsoft_Autogen_Agent_Thread_Safety
- Heuristic:Microsoft_Autogen_Name_Uniqueness_Constraints
- Heuristic:Microsoft_Autogen_Model_Context_Limiting