Implementation: Microsoft AutoGen SelectorGroupChat Init
| Knowledge Sources | |
|---|---|
| Domains | Multi-Agent Systems, Orchestration, LLM Routing, Dynamic Speaker Selection, AI Agents |
| Last Updated | 2026-02-11 00:00 GMT |
Overview
A concrete tool, provided by Microsoft AutoGen, for assembling agents into a dynamically routed team in which an LLM selects the next speaker based on conversation context.
Description
SelectorGroupChat is an orchestration class that uses an LLM to decide which agent speaks next at each turn. It requires at least two participants and a model client dedicated to the selection task (which may be the same as, or different from, the agents' model clients).
During each turn, the selector manager:
- Optionally narrows the candidate list using `candidate_func`.
- Optionally delegates to `selector_func` for programmatic selection (a None return falls back to LLM selection).
- Otherwise, formats the `selector_prompt` with participant role descriptions and conversation history, sends it to the selector `model_client`, and parses the response to identify the next speaker.
- If the LLM returns an invalid name, retries up to `max_selector_attempts` times.
- If `allow_repeated_speaker` is False, removes the last speaker from the candidate list before selection.
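The retry and filtering steps above can be sketched in plain Python. This is a simplified stand-in for the manager's per-turn logic, not AutoGen's implementation; `select_next_speaker` and the fake LLM callable are hypothetical names:

```python
def select_next_speaker(ask_llm, participants, last_speaker,
                        allow_repeated_speaker=False,
                        max_selector_attempts=3):
    """Simplified stand-in for the selector manager's per-turn logic."""
    # Drop the previous speaker from the pool unless repeats are allowed.
    candidates = [p for p in participants
                  if allow_repeated_speaker or p != last_speaker]
    # Ask the "LLM" until it names a valid candidate or attempts run out.
    for _ in range(max_selector_attempts):
        reply = ask_llm(candidates).strip()
        if reply in candidates:
            return reply
    return None  # selection failed after all attempts

# Fake LLM that answers with an invalid name once, then a valid one.
replies = iter(["moderator", "coder"])
chosen = select_next_speaker(lambda cands: next(replies),
                             ["planner", "coder", "reviewer"],
                             last_speaker="planner")
print(chosen)  # -> coder
```

Note how "planner" is filtered out before the first attempt because `allow_repeated_speaker` defaults to False.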
The class validates at initialization that at least two participants are provided. The selector prompt uses three template variables: {roles} (formatted name-description pairs), {participants} (list of names), and {history} (conversation transcript).
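As a rough illustration of those three template variables, the substitution can be reproduced with plain `str.format`. The role and history strings below are guesses at the shape of the rendered values, not AutoGen's exact output:

```python
# A shortened version of the default prompt, with the three placeholders
# filled the way the manager does conceptually.
selector_prompt = (
    "You are in a role play game. The following roles are available:\n"
    "{roles}.\n"
    "Read the following conversation. Then select the next role from "
    "{participants} to play. Only return the role.\n"
    "{history}"
)

roles = "planner: Plans the approach.\ncoder: Writes Python code."
participants = ["planner", "coder"]
history = "user: Build a CLI tool.\nplanner: Step 1 - parse args."

prompt = selector_prompt.format(roles=roles,
                                participants=participants,
                                history=history)
print(prompt)
```

A prompt missing any of the three placeholders would raise a `KeyError`-free but incomplete prompt, so custom templates should keep all of `{roles}`, `{participants}`, and `{history}`.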
Usage
Import SelectorGroupChat from autogen_agentchat.teams. Instantiate with a list of agent participants, a model client for selection, and optionally customize the selector prompt, allow/disallow repeated speakers, or provide custom selector/candidate functions. Then call run() or run_stream().
Code Reference
Source Location
- Repository: Microsoft AutoGen
- File: `python/packages/autogen-agentchat/src/autogen_agentchat/teams/_group_chat/_selector_group_chat.py` (lines 597-646)
Signature
```python
class SelectorGroupChat:
    def __init__(
        self,
        participants: List[ChatAgent | Team],
        model_client: ChatCompletionClient,
        *,
        name: str | None = None,
        description: str | None = None,
        termination_condition: TerminationCondition | None = None,
        max_turns: int | None = None,
        runtime: AgentRuntime | None = None,
        selector_prompt: str = """You are in a role play game. The following roles are available:
{roles}.
Read the following conversation. Then select the next role from {participants} to play. Only return the role.

{history}

Read the above conversation. Then select the next role from {participants} to play. Only return the role.
""",
        allow_repeated_speaker: bool = False,
        max_selector_attempts: int = 3,
        selector_func: Optional[SelectorFuncType] = None,
        candidate_func: Optional[CandidateFuncType] = None,
        custom_message_types: List[type[BaseAgentEvent | BaseChatMessage]] | None = None,
        emit_team_events: bool = False,
        model_client_streaming: bool = False,
        model_context: ChatCompletionContext | None = None,
    ):
```
Import
```python
from autogen_agentchat.teams import SelectorGroupChat
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| participants | List[ChatAgent or Team] | Yes | At least two agents or nested teams. Their names and descriptions are used by the selector to make speaker choices. |
| model_client | ChatCompletionClient | Yes | LLM client used for speaker selection. Can be the same or different model from the agents' clients. A fast, cheap model is often sufficient. |
| name | str or None | No | Name for this team instance. Defaults to "SelectorGroupChat". |
| description | str or None | No | Human-readable description. Defaults to "A team of agents." |
| termination_condition | TerminationCondition or None | No | Composable condition to stop the conversation. |
| max_turns | int or None | No | Hard upper limit on conversation turns. |
| runtime | AgentRuntime or None | No | Custom runtime for agent registration. Defaults to SingleThreadedAgentRuntime. |
| selector_prompt | str | No | Template for the selection LLM prompt. Must contain {roles}, {participants}, and {history} placeholders. Defaults to a standard role-play selection prompt. |
| allow_repeated_speaker | bool | No | Whether the same agent can be selected in consecutive turns. Defaults to False. |
| max_selector_attempts | int | No | Maximum retries if the LLM returns an invalid agent name. Defaults to 3. |
| selector_func | SelectorFuncType or None | No | Custom function for programmatic speaker selection. When provided, it takes precedence over the LLM selector; returning None falls back to LLM-based selection for that turn. Signature: (Sequence[BaseAgentEvent or BaseChatMessage]) -> str or None. |
| candidate_func | CandidateFuncType or None | No | Custom function that narrows the candidate list before selection. Signature: (Sequence[BaseAgentEvent or BaseChatMessage]) -> List[str]. |
| custom_message_types | List[type] or None | No | Additional message types that participants may produce. |
| emit_team_events | bool | No | Whether to surface internal orchestration events in the stream. Defaults to False. |
| model_client_streaming | bool | No | Whether to use streaming for the selector LLM calls. Defaults to False. |
| model_context | ChatCompletionContext or None | No | Context manager for the selector model's conversation history. |
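A `candidate_func` can be sketched with a stand-in message type. `Msg` below is hypothetical; real AutoGen messages are `BaseChatMessage` subclasses with more fields, but only the `source` attribute matters for this logic:

```python
from dataclasses import dataclass
from typing import Sequence


@dataclass
class Msg:
    """Stand-in for a chat message; real ones are BaseChatMessage subclasses."""
    source: str
    content: str


def my_candidates(messages: Sequence[Msg]) -> list[str]:
    """Narrow the pool: after the coder speaks, only the reviewer may follow."""
    if messages and messages[-1].source == "coder":
        return ["reviewer"]
    return ["planner", "coder", "reviewer"]


print(my_candidates([Msg("user", "task"), Msg("coder", "def f(): ...")]))  # -> ['reviewer']
```

Unlike `selector_func`, this does not pick the speaker itself; the LLM selector still chooses among the returned names.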
Outputs
| Name | Type | Description |
|---|---|---|
| instance | SelectorGroupChat | A configured team instance. Call run() for a final TaskResult or run_stream() for an async stream of messages and the final TaskResult. |
Usage Examples
Basic Example
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import SelectorGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    planner = AssistantAgent(
        name="planner",
        model_client=model_client,
        description="Plans the approach to solving a task.",
        system_message="You are a planner. Break down tasks into steps. Say TERMINATE when done.",
    )
    coder = AssistantAgent(
        name="coder",
        model_client=model_client,
        description="Writes Python code to implement plans.",
        system_message="You write Python code based on plans. Focus only on implementation.",
    )
    reviewer = AssistantAgent(
        name="reviewer",
        model_client=model_client,
        description="Reviews code for correctness and suggests improvements.",
        system_message="You review code for bugs and style issues.",
    )
    team = SelectorGroupChat(
        participants=[planner, coder, reviewer],
        model_client=model_client,
        termination_condition=TextMentionTermination("TERMINATE"),
        max_turns=10,
    )
    result = await team.run(task="Create a Python function to compute Fibonacci numbers.")
    print(result)


asyncio.run(main())
```
With Custom Selector Function
```python
import asyncio
from typing import Sequence

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.messages import BaseAgentEvent, BaseChatMessage
from autogen_agentchat.teams import SelectorGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient


# Custom selector: alternate between writer and editor, with critic every 3rd turn
def my_selector(messages: Sequence[BaseAgentEvent | BaseChatMessage]) -> str | None:
    turn = len(messages)
    if turn % 3 == 0:
        return "critic"
    elif turn % 3 == 1:
        return "writer"
    else:
        return "editor"


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    writer = AssistantAgent(name="writer", model_client=model_client, description="Writes content.")
    editor = AssistantAgent(name="editor", model_client=model_client, description="Edits content.")
    critic = AssistantAgent(name="critic", model_client=model_client, description="Critiques content.")
    team = SelectorGroupChat(
        participants=[writer, editor, critic],
        model_client=model_client,
        selector_func=my_selector,
        termination_condition=MaxMessageTermination(9),
    )
    result = await team.run(task="Write a product description for a smart watch.")
    print(result)


asyncio.run(main())
```