

Workflow:Microsoft Autogen Multi Agent Conversation

From Leeroopedia
Knowledge Sources
Domains: Multi_Agent_Systems, LLM_Orchestration, Conversational_AI
Last Updated: 2026-02-11 18:00 GMT

Overview

End-to-end process for orchestrating multi-agent conversations where specialized agents collaborate through structured turn-taking patterns (round-robin or LLM-driven selection) to solve complex tasks.

Description

This workflow demonstrates how to build a multi-agent team where multiple LLM-powered agents collaborate on a shared task through structured conversation. The framework supports two primary orchestration strategies: RoundRobinGroupChat for deterministic sequential turn-taking, and SelectorGroupChat for dynamic LLM-driven speaker selection based on conversation context.

The process covers defining specialized agents with distinct system prompts, configuring a model client, assembling them into a team with appropriate termination conditions, running the team on a task, and consuming the streamed results. Each agent contributes its expertise in sequence or as selected, building on previous messages to produce a comprehensive response.

Usage

Execute this workflow when you need multiple specialized agents to collaborate on a task that benefits from diverse perspectives or domain expertise. Typical scenarios include:

  • A planning task requiring research, analysis, and summarization from different angles
  • A problem-solving task where agents check each other's work
  • Any multi-step reasoning task where sequential refinement produces better results than a single agent

This workflow assumes you have an OpenAI-compatible model client configured and want agents to take turns contributing to a shared conversation thread.

Execution Steps

Step 1: Define Model Client

Configure the LLM client that agents will use for inference. This involves selecting a model provider (OpenAI, Azure, or compatible endpoint) and specifying model parameters such as model name and optional settings. The model client serves as the shared inference backend for all agents in the team.

Key considerations:

  • Choose a model capable of instruction following and role-playing
  • Agents may share a single model client, or each agent can be assigned its own
  • For SelectorGroupChat, an additional model client is needed for the speaker selection mechanism
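As a minimal sketch of this step, the configuration below groups the parameters described above into a plain dictionary. The parameter names mirror common OpenAI-compatible clients (such as AutoGen's model client), but the exact field names and the `make_client` factory are illustrative assumptions, not the framework's API.

```python
# Hypothetical model-client configuration; field names are assumptions
# chosen to mirror typical OpenAI-compatible clients.
model_config = {
    "model": "gpt-4o",      # pick a model capable of instruction following
    "temperature": 0.7,     # optional sampling setting
    "base_url": None,       # set for Azure or other compatible endpoints
}

def make_client(config):
    """Stand-in factory: in a real run this would construct the
    provider-specific client object that agents share for inference."""
    return dict(config)

# One client can back every agent in the team; SelectorGroupChat would
# additionally need a client for its speaker-selection mechanism.
shared_client = make_client(model_config)
```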

Step 2: Create Specialized Agents

Instantiate multiple AssistantAgent instances, each with a unique name, description, and system message that defines its expertise. The description is used by SelectorGroupChat to determine which agent should speak next. The system message shapes the agent's behavior and output style.

Key considerations:

  • Each agent name must be unique within the team
  • Agent descriptions should clearly differentiate their roles for selector-based routing
  • System messages should instruct agents on their specific contribution style
  • Optionally equip agents with tools or workbenches for additional capabilities
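The sketch below captures the three attributes this step calls out (unique name, routing description, behavior-shaping system message) in a minimal stand-in class. The `Agent` dataclass and the three example roles are illustrative, not AutoGen's `AssistantAgent` itself.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Minimal stand-in for an AssistantAgent-style specialized agent."""
    name: str            # must be unique within the team
    description: str     # used by selector-based routing to pick a speaker
    system_message: str  # shapes the agent's contribution style

agents = [
    Agent("researcher", "Gathers background facts on the task.",
          "You research the topic and list the key facts."),
    Agent("analyst", "Interprets the gathered research.",
          "You analyze the facts and draw conclusions."),
    Agent("summarizer", "Produces the final consolidated answer.",
          "You summarize the discussion. End your reply with TERMINATE."),
]

# Enforce the uniqueness requirement on agent names.
assert len({a.name for a in agents}) == len(agents)
```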

Step 3: Configure Termination Conditions

Define when the conversation should end by composing termination conditions. Conditions can be combined with logical OR and AND operators to create nuanced stopping criteria.

Common conditions:

  • TextMentionTermination detects a keyword like "TERMINATE" in agent output
  • MaxMessageTermination caps the total number of messages
  • TokenUsageTermination limits total token consumption
  • TimeoutTermination enforces a wall-clock time limit
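The OR-composition described above can be sketched without the framework as predicates over the message thread. The function names below are illustrative stand-ins for `TextMentionTermination` and `MaxMessageTermination`; in AutoGen the equivalent composition is done with the `|` and `&` operators on condition objects.

```python
def text_mention(keyword):
    """Stop once any message in the thread mentions the keyword."""
    return lambda thread: any(keyword in m["content"] for m in thread)

def max_messages(limit):
    """Stop once the thread reaches the message cap."""
    return lambda thread: len(thread) >= limit

def either(*conds):
    """OR-composition: stop when any sub-condition fires."""
    return lambda thread: any(c(thread) for c in conds)

# Stop on the keyword "TERMINATE" or after 10 messages, whichever first.
stop = either(text_mention("TERMINATE"), max_messages(10))
```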

Step 4: Assemble the Team

Combine agents, termination conditions, and orchestration strategy into a team. For RoundRobinGroupChat, agents speak in the order they are listed. For SelectorGroupChat, an LLM evaluates the conversation and selects the most appropriate next speaker, with optional custom selector and candidate filter functions.

Key considerations:

  • RoundRobinGroupChat is deterministic and requires no additional model client
  • SelectorGroupChat requires a model_client parameter for the selection mechanism
  • The selector_prompt template can be customized with {roles}, {participants}, and {history} placeholders
  • allow_repeated_speaker controls whether the same agent can speak consecutively
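The two orchestration strategies can be contrasted in a framework-free sketch: round-robin is a fixed cycle over the participant list, while selector-based teams render a prompt (with the `{roles}`, `{participants}`, and `{history}` placeholders named above) for an LLM to choose the next speaker. The template wording here is an illustrative example, not AutoGen's default `selector_prompt`.

```python
from itertools import cycle

participants = ["researcher", "analyst", "summarizer"]

# RoundRobinGroupChat-style turn-taking: deterministic, no extra model
# client needed; agents speak in the order they are listed.
order = cycle(participants)
first_four = [next(order) for _ in range(4)]

# SelectorGroupChat-style selection: an LLM fills a template like this
# and names the next speaker (template text is an assumption).
selector_prompt = (
    "You are coordinating the following roles: {roles}.\n"
    "Conversation so far:\n{history}\n"
    "Select the next speaker from {participants}."
)
prompt = selector_prompt.format(
    roles="researcher, analyst, summarizer",
    participants=participants,
    history="user: Plan a report on solar energy.",
)
```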

Step 5: Run the Team on a Task

Submit a task to the team and consume the results. The task can be a string, a message object, or a sequence of messages. The team processes the conversation through its orchestration loop until a termination condition is met or max_turns is reached.

What happens:

  • The task is broadcast to all agents as the initial message
  • The orchestration manager selects speakers according to the team pattern
  • Each selected agent receives the full message thread and produces a response
  • Messages are appended to the shared thread
  • The process continues until termination
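The five bullets above can be sketched as one orchestration loop. This is a framework-free simulation of the run: the task seeds the thread, a selection function picks each speaker, a stub reply stands in for the agent's LLM call, and the loop exits on a termination predicate or the `max_turns` cap. All function names here are illustrative.

```python
def run_team(task, agents, select_next, should_stop, max_turns=20):
    """Simulate a team run over a shared message thread."""
    thread = [{"source": "user", "content": task}]  # task starts the thread
    for _ in range(max_turns):
        if should_stop(thread):
            break
        speaker = select_next(thread, agents)
        # Each selected agent sees the full thread; a stub reply stands
        # in for a real model call.
        reply = f"{speaker} responding to {len(thread)} message(s)"
        thread.append({"source": speaker, "content": reply})
    return thread

def round_robin(thread, agents):
    # Turns taken so far = len(thread) - 1 (the first entry is the task).
    return agents[(len(thread) - 1) % len(agents)]

result = run_team(
    task="Plan a report on solar energy.",
    agents=["researcher", "analyst", "summarizer"],
    select_next=round_robin,
    should_stop=lambda thread: len(thread) >= 5,
)
```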

Step 6: Consume and Display Results

Process the team output, which can be consumed as a blocking TaskResult or as an async stream of individual messages and events. The Console utility provides formatted terminal output with agent names, message types, and optional token usage statistics.

Output includes:

  • Individual agent messages with source attribution
  • Internal events such as tool calls, streaming chunks, and speaker selections
  • Final TaskResult containing the complete message history and stop reason
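A minimal sketch of consuming the final result: the dataclass below mirrors the shape described above (complete message history plus stop reason), and `render` imitates Console-style output with source attribution. The class and function here are stand-ins, not AutoGen's actual `TaskResult` or `Console`.

```python
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    """Stand-in for the final result: message history + stop reason."""
    messages: list = field(default_factory=list)
    stop_reason: str = ""

def render(result):
    """Console-style rendering: agent name in brackets, then content."""
    lines = [f"[{m['source']}] {m['content']}" for m in result.messages]
    lines.append(f"Stop reason: {result.stop_reason}")
    return "\n".join(lines)

result = TaskResult(
    messages=[{"source": "summarizer", "content": "Report done. TERMINATE"}],
    stop_reason="Text 'TERMINATE' mentioned",
)
output = render(result)
```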

Execution Diagram

GitHub URL

Workflow Repository