Implementation: Microsoft Autogen MagenticOneGroupChat
| Key | Value |
|---|---|
| id | Microsoft_Autogen_MagenticOneGroupChat |
| source | Microsoft_Autogen |
| category | Team |
Overview
Description
The MagenticOneGroupChat is a sophisticated multi-agent team implementation based on the Magentic-One architecture, a generalist multi-agent system for solving complex tasks. It orchestrates conversations between participant agents using an intelligent orchestrator that manages task planning, fact gathering, progress tracking, and adaptive replanning.
The team is designed for complex, multi-step tasks that require coordination between specialized agents. Unlike simpler group chat teams (RoundRobinGroupChat, SelectorGroupChat), MagenticOneGroupChat uses a specialized orchestrator that:
- Task Analysis: Breaks down tasks into facts, plans, and action items
- Dynamic Planning: Creates and updates plans based on progress
- Progress Monitoring: Tracks whether the task is complete, stuck in loops, or making progress
- Intelligent Routing: Selects the most appropriate agent for each step
- Stall Recovery: Detects when progress stalls and triggers replanning
- Final Answer Generation: Synthesizes a final response from the conversation transcript
The orchestrator uses LLM reasoning to make decisions about which agent should speak next and what instructions to give them.
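The control flow above can be pictured as a loop over a shared "ledger" of facts and plan items, with a stall counter that triggers replanning. The sketch below is plain illustrative Python under invented names (Ledger, orchestrate, the stub solver agent); it is not the Autogen implementation, which uses LLM calls for agent routing and progress checks rather than the hard-coded logic shown here.

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    facts: list = field(default_factory=list)   # gathered facts (unused in this stub)
    plan: list = field(default_factory=list)    # remaining action items
    stalls: int = 0                             # consecutive turns without progress

def orchestrate(task, agents, max_turns=20, max_stalls=3):
    """Toy loop: pick an agent, record its reply, replan after repeated stalls."""
    ledger = Ledger(plan=[f"step for {task}"])
    transcript = []
    for turn in range(max_turns):
        if not ledger.plan:                      # all plan items done: task complete
            break
        agent = agents[turn % len(agents)]       # real orchestrator routes via LLM reasoning
        reply, progressed = agent(ledger.plan[0])
        transcript.append((agent.__name__, reply))
        if progressed:
            ledger.plan.pop(0)
            ledger.stalls = 0
        else:
            ledger.stalls += 1
            if ledger.stalls >= max_stalls:      # stall recovery: rewrite the plan
                ledger.plan = ["revised " + step for step in ledger.plan]
                ledger.stalls = 0
    return transcript

def solver(step):
    """Stub agent that always makes progress on the current step."""
    return f"done: {step}", True

print(orchestrate("demo task", [solver]))  # [('solver', 'done: step for demo task')]
```

The real orchestrator additionally maintains a fact sheet and regenerates the full plan with the model when it detects a stall, rather than mechanically rewriting steps.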
Usage
MagenticOneGroupChat is used for complex, open-ended tasks that require:
- Multi-step problem solving with planning and execution
- Coordination between multiple specialized agents
- Adaptive behavior when initial approaches fail
- Reasoning about task completion and progress
The team does not support using other teams as participants (unlike RoundRobinGroupChat and SelectorGroupChat). All participants must be ChatAgent instances.
Key configuration parameters:
- participants: List of specialized ChatAgent instances (e.g., researcher, coder, writer)
- model_client: LLM client for orchestrator reasoning
- max_turns: Maximum number of conversation turns (default: 20)
- max_stalls: Maximum stalls before replanning (default: 3)
- termination_condition: Optional condition to stop early
- final_answer_prompt: Custom prompt for generating final response
Code Reference
Source Location
- Repository: https://github.com/microsoft/autogen
- File Path: python/packages/autogen-agentchat/src/autogen_agentchat/teams/_group_chat/_magentic_one/_magentic_one_group_chat.py
- Lines: 1-210
Signature
```python
class MagenticOneGroupChat(BaseGroupChat, Component[MagenticOneGroupChatConfig]):
    def __init__(
        self,
        participants: List[ChatAgent],
        model_client: ChatCompletionClient,
        *,
        name: str | None = None,
        description: str | None = None,
        termination_condition: TerminationCondition | None = None,
        max_turns: int | None = 20,
        runtime: AgentRuntime | None = None,
        max_stalls: int = 3,
        final_answer_prompt: str = ORCHESTRATOR_FINAL_ANSWER_PROMPT,
        custom_message_types: List[type[BaseAgentEvent | BaseChatMessage]] | None = None,
        emit_team_events: bool = False,
    ):
        ...
```
Import
```python
from autogen_agentchat.teams import MagenticOneGroupChat
```
I/O Contract
Inputs
| Parameter | Type | Required | Description |
|---|---|---|---|
| participants | List[ChatAgent] | Yes | List of chat agents (at least one required). Cannot include teams. |
| model_client | ChatCompletionClient | Yes | LLM client for orchestrator reasoning and decision-making |
| name | str \| None | No | Team name (default: "MagenticOneGroupChat") |
| description | str \| None | No | Team description (default: "A team of agents.") |
| termination_condition | TerminationCondition \| None | No | Condition to stop the team early (default: None) |
| max_turns | int \| None | No | Maximum conversation turns (default: 20) |
| runtime | AgentRuntime \| None | No | Custom agent runtime (default: creates a new runtime) |
| max_stalls | int | No | Maximum stalls before replanning (default: 3) |
| final_answer_prompt | str | No | LLM prompt for generating final answer (default: ORCHESTRATOR_FINAL_ANSWER_PROMPT) |
| custom_message_types | List[type[BaseAgentEvent \| BaseChatMessage]] \| None | No | Custom message types used by agents (default: None) |
| emit_team_events | bool | No | Whether to emit team events through run_stream (default: False) |
Outputs
| Method | Return Type | Description |
|---|---|---|
| run | TaskResult | Run the team to completion and return the final result with all messages and the stop reason |
| run_stream | AsyncGenerator[...] | Stream execution, yielding intermediate messages and events, ending with the final TaskResult |
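The relationship between the two output methods can be illustrated with plain asyncio stand-ins: run_stream yields intermediate items as they are produced and the final result object last, while run is equivalent to draining the stream and keeping only that final result. FakeTaskResult, fake_run_stream, and fake_run below are made-up names for illustration, not Autogen APIs.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class FakeTaskResult:            # stand-in for autogen's TaskResult
    messages: list
    stop_reason: str

async def fake_run_stream(task):
    """Yield intermediate messages first, then the final result object last."""
    messages = [f"{task}: message {i}" for i in range(3)]
    for m in messages:
        yield m                   # available to the caller as soon as produced
    yield FakeTaskResult(messages, "completed")

async def fake_run(task):
    """run() == consume the whole stream and return only the final item."""
    last = None
    async for item in fake_run_stream(task):
        last = item
    return last

result = asyncio.run(fake_run("demo"))
print(result.stop_reason)  # completed
```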
Usage Examples
Basic Usage
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import MagenticOneGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create a specialized assistant
    assistant = AssistantAgent(
        "Assistant",
        model_client=model_client,
    )

    # Create the team
    team = MagenticOneGroupChat([assistant], model_client=model_client)

    # Run the team with a task
    await Console(team.run_stream(task="Provide a different proof for Fermat's Last Theorem"))


asyncio.run(main())
```
Multi-Agent Team
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import MagenticOneGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def multi_agent_example() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create specialized agents
    researcher = AssistantAgent(
        "Researcher",
        description="Expert at finding and analyzing information",
        model_client=model_client,
    )
    coder = AssistantAgent(
        "Coder",
        description="Expert at writing and debugging code",
        model_client=model_client,
    )
    writer = AssistantAgent(
        "Writer",
        description="Expert at writing clear documentation",
        model_client=model_client,
    )

    # Create the team
    team = MagenticOneGroupChat(
        participants=[researcher, coder, writer],
        model_client=model_client,
        max_turns=30,
        max_stalls=5,
    )

    # Run a complex task
    result = await team.run(
        task="Research Python async patterns, implement an example, and write documentation"
    )
    print(f"Task completed: {result.stop_reason}")
    for message in result.messages:
        print(f"{message.source}: {message.content}")
```
With Termination Condition
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import MagenticOneGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def with_termination() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agents = [
        AssistantAgent("agent1", model_client=model_client),
        AssistantAgent("agent2", model_client=model_client),
    ]

    # Stop after 50 messages, or earlier when the orchestrator decides the task is complete
    termination = MaxMessageTermination(max_messages=50)

    team = MagenticOneGroupChat(
        participants=agents,
        model_client=model_client,
        termination_condition=termination,
        max_turns=25,
        max_stalls=3,
    )
    result = await team.run(task="Solve this complex problem...")
```
Custom Final Answer Prompt
```python
from autogen_agentchat.teams import MagenticOneGroupChat

CUSTOM_FINAL_PROMPT = """
The team has completed working on this task:
{task}
Based on the conversation above, provide a concise summary of:
1. What was accomplished
2. Key findings or results
3. Any remaining issues or recommendations
Format the response in markdown.
"""


async def custom_final_answer() -> None:
    # agent1, agent2, and model_client are assumed to be defined as in the earlier examples
    team = MagenticOneGroupChat(
        participants=[agent1, agent2],
        model_client=model_client,
        final_answer_prompt=CUSTOM_FINAL_PROMPT,
    )
    result = await team.run(task="Analyze this data...")
    # The final message uses the custom prompt format
```
Streaming with Events
```python
from autogen_agentchat.base import TaskResult
from autogen_agentchat.messages import BaseChatMessage
from autogen_agentchat.teams import MagenticOneGroupChat


async def streaming_example() -> None:
    team = MagenticOneGroupChat(
        participants=[agent1, agent2, agent3],
        model_client=model_client,
        emit_team_events=True,  # Enable team event streaming
    )
    async for event in team.run_stream(task="Complex multi-step task"):
        if isinstance(event, TaskResult):
            print(f"Final result: {event.stop_reason}")
        elif isinstance(event, BaseChatMessage):
            print(f"[{event.source}]: {event.content}")
        else:
            print(f"Event: {type(event).__name__}")
```
State Persistence
```python
from autogen_agentchat.state import MagenticOneOrchestratorState
from autogen_agentchat.teams import MagenticOneGroupChat


async def state_persistence() -> None:
    team = MagenticOneGroupChat(
        participants=[agent1, agent2],
        model_client=model_client,
    )

    # Run for a few turns
    async for event in team.run_stream(task="Long task..."):
        if some_condition:
            # Save state
            state = await team.save_state()

            # Later, restore and continue
            new_team = MagenticOneGroupChat(
                participants=[agent1, agent2],
                model_client=model_client,
            )
            await new_team.load_state(state)

            # Continue execution
            result = await new_team.run(task="Continue task...")
            break
```
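Since save_state() returns a JSON-serializable mapping, the saved state can be persisted to disk with the standard json module and reloaded in a later process before calling load_state. The state dict below is a made-up placeholder, not the real orchestrator state, which carries the message thread, ledger, and per-agent states.

```python
import json
import os
import tempfile

# Placeholder for the mapping returned by team.save_state()
state = {"type": "TeamState", "agent_states": {}, "stalls": 0}

path = os.path.join(tempfile.mkdtemp(), "team_state.json")
with open(path, "w") as f:
    json.dump(state, f)           # persist across process restarts

with open(path) as f:
    restored = json.load(f)       # pass this to new_team.load_state(restored)

print(restored == state)  # True
```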
Error Handling
```python
async def error_handling() -> None:
    team = MagenticOneGroupChat(
        participants=[agent1, agent2],
        model_client=model_client,
        max_stalls=3,
    )
    try:
        result = await team.run(task="Difficult task")
        # stop_reason may be None, so guard before lowercasing
        stop_reason = (result.stop_reason or "").lower()
        if "stalled" in stop_reason:
            print("Team stalled after 3 attempts at replanning")
        elif "error" in stop_reason:
            print("Team encountered an error")
        else:
            print("Task completed successfully")
    except Exception as e:
        print(f"Team execution failed: {e}")
    finally:
        await model_client.close()  # Release the model client's resources
```
Configuration Serialization
```python
from autogen_agentchat.teams import MagenticOneGroupChat


async def config_serialization() -> None:
    # Create the team
    team = MagenticOneGroupChat(
        participants=[agent1, agent2],
        model_client=model_client,
        max_turns=25,
        max_stalls=4,
        name="ResearchTeam",
    )

    # Serialize the configuration
    config = team.dump_component()

    # Later, recreate the team from the config
    restored_team = MagenticOneGroupChat.load_component(config)

    # The restored team has the same configuration
    assert restored_team.name == "ResearchTeam"
```
Related Pages
- Microsoft_Autogen_MagenticOne_Orchestrator - Internal orchestrator implementation
- Microsoft_Autogen_MagenticOne_Prompts - Prompts used by the orchestrator
- Microsoft_Autogen_BaseGroupChat - Base class for group chat teams
- Microsoft_Autogen_ChatAgent_Protocol - Protocol for participant agents
- Microsoft_Autogen_AssistantAgent - Common participant agent type
- Microsoft_Autogen_ChatAgentContainer - Container for participant agents
- Microsoft_Autogen_MagenticOneOrchestratorState - State model for orchestrator
- Microsoft_Autogen_RoundRobinGroupChat - Simpler round-robin group chat
- Microsoft_Autogen_SelectorGroupChat - LLM-based selector group chat
- Microsoft_Autogen_TerminationCondition - Conditions for stopping execution
References
If you use MagenticOneGroupChat in your work, please cite:
@article{fourney2024magentic,
title={Magentic-one: A generalist multi-agent system for solving complex tasks},
author={Fourney, Adam and Bansal, Gagan and Mozannar, Hussein and Tan, Cheng and Salinas, Eduardo and Niedtner, Friederike and Proebsting, Grace and Bassman, Griffin and Gerrits, Jack and Alber, Jacob and others},
journal={arXiv preprint arXiv:2411.04468},
year={2024}
}