
Implementation:Microsoft Autogen TaskResult Console

From Leeroopedia
Domains Multi-Agent Systems, Result Collection, Stream Rendering, Workflow Output, Observability
Last Updated 2026-02-11 00:00 GMT

Overview

Concrete tools for collecting structured workflow results and rendering execution streams to the console, provided by Microsoft AutoGen.

Description

This implementation covers two complementary components:

TaskResult is a Pydantic model that captures the output of a team execution. It contains the ordered sequence of messages produced during execution and an optional stop reason string. In graph workflows, the messages appear in graph execution order: start nodes first, then sequential successors, parallel branches in completion order, and loop iterations in temporal order. Streaming chunk events are excluded from the message list.

Console is an asynchronous function that consumes the stream produced by run_stream() and renders each event to stdout in a human-readable format. It displays:

  • Event type and source agent headers for each message.
  • Text content of each message, with support for multi-modal rendering (inline images in iTerm2).
  • Streaming chunks displayed incrementally as they arrive.
  • Optional statistics including message count, stop reason, token usage (prompt and completion), and execution duration.

Console returns the final TaskResult (or Response for single-agent streams) after consuming the entire stream, making it suitable as both a rendering and collection utility.
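
This dual render-and-collect behavior can be illustrated with a minimal stand-in consumer (console_sketch and ResultStub are hypothetical names, not the real Console implementation): it prints each message as it arrives, remembers the terminal result object, and raises ValueError if the stream never produced one.

```python
import asyncio

class ResultStub:
    """Hypothetical stand-in for the terminal TaskResult/Response object."""
    def __init__(self, stop_reason=None):
        self.stop_reason = stop_reason

async def console_sketch(stream):
    """Consume the stream, render each message, return the last result."""
    last_result = None
    async for item in stream:
        if isinstance(item, ResultStub):
            last_result = item  # terminal object: collected, not rendered
        else:
            source, content = item
            print(f"---------- TextMessage ({source}) ----------")
            print(content)
    if last_result is None:
        raise ValueError("No TaskResult or Response was processed.")
    return last_result

async def demo():
    async def stream():
        yield ("A", "Once upon a time...")
        yield ResultStub(stop_reason="Digraph execution is complete")
    result = await console_sketch(stream())
    print(f"Stop reason: {result.stop_reason}")

asyncio.run(demo())
```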

Usage

Use TaskResult whenever you need to inspect the output of a run() or run_stream() call programmatically. Access result.messages for the full message history and result.stop_reason for termination details.

Use Console when you want to render the execution stream to stdout during development, debugging, or interactive use. It can wrap any run_stream() call and provides the TaskResult as its return value.

Code Reference

Source Location

  • Repository: Microsoft AutoGen
  • File (TaskResult): python/packages/autogen-agentchat/src/autogen_agentchat/base/_task.py (Lines 9-16)
  • File (Console): python/packages/autogen-agentchat/src/autogen_agentchat/ui/_console.py (Lines 82-88)

Signature

# TaskResult
class TaskResult(BaseModel):
    messages: Sequence[SerializeAsAny[BaseAgentEvent | BaseChatMessage]]
    stop_reason: str | None = None

# Console
async def Console(
    stream: AsyncGenerator[BaseAgentEvent | BaseChatMessage | T, None],
    *,
    no_inline_images: bool = False,
    output_stats: bool = False,
    user_input_manager: UserInputManager | None = None,
) -> T:

Import

from autogen_agentchat.base import TaskResult
from autogen_agentchat.ui import Console

I/O Contract

TaskResult Fields

  • messages (Sequence[BaseAgentEvent | BaseChatMessage], required): The ordered sequence of messages produced during execution. In graph workflows, ordered by graph execution topology; streaming chunks are excluded.
  • stop_reason (str | None, optional): The reason execution terminated. For graph workflows, "Digraph execution is complete" on natural completion; None if not set.

Console Inputs

  • stream (AsyncGenerator[BaseAgentEvent | BaseChatMessage | TaskResult, None], required): The message stream from run_stream() or on_messages_stream().
  • no_inline_images (bool, optional): If True, disables inline image rendering in iTerm2. Defaults to False.
  • output_stats (bool, optional): If True, prints summary statistics (message count, stop reason, token usage, duration) at the end of the stream. Defaults to False. Experimental.
  • user_input_manager (UserInputManager | None, optional): Manager for handling user input events when using UserProxyAgent. Defaults to None.

Console Output

  • result (TaskResult | Response): The last TaskResult (from a team's run_stream()) or Response (from a single agent's on_messages_stream()) processed from the stream. Raises ValueError if the stream contains no result.

Usage Examples

Basic Example: Collecting TaskResult from run()

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import DiGraphBuilder, GraphFlow
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")

    agent_a = AssistantAgent("A", model_client=model_client,
                              system_message="You are a helpful assistant.")
    agent_b = AssistantAgent("B", model_client=model_client,
                              system_message="Translate input to French.")

    builder = DiGraphBuilder()
    builder.add_node(agent_a).add_node(agent_b)
    builder.add_edge(agent_a, agent_b)
    graph = builder.build()

    team = GraphFlow(
        participants=builder.get_participants(),
        graph=graph,
        termination_condition=MaxMessageTermination(5),
    )

    # run() returns TaskResult directly
    result = await team.run(task="Tell me a joke.")

    # Inspect the result
    print(f"Stop reason: {result.stop_reason}")
    print(f"Number of messages: {len(result.messages)}")
    for msg in result.messages:
        # Agent events do not implement to_model_text(); chat messages do.
        if hasattr(msg, "to_model_text"):
            print(f"  [{msg.source}]: {msg.to_model_text()[:80]}")

    await model_client.close()


asyncio.run(main())

Using Console for Stream Rendering

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import DiGraphBuilder, GraphFlow
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")

    agent_a = AssistantAgent("A", model_client=model_client,
                              system_message="You are a helpful assistant.")
    agent_b = AssistantAgent("B", model_client=model_client,
                              system_message="Translate input to Chinese.")
    agent_c = AssistantAgent("C", model_client=model_client,
                              system_message="Translate input to English.")

    builder = DiGraphBuilder()
    builder.add_node(agent_a).add_node(agent_b).add_node(agent_c)
    builder.add_edge(agent_a, agent_b).add_edge(agent_b, agent_c)
    graph = builder.build()

    team = GraphFlow(
        participants=builder.get_participants(),
        graph=graph,
        termination_condition=MaxMessageTermination(5),
    )

    # Console renders the stream to stdout and returns TaskResult
    result = await Console(
        team.run_stream(task="Write a short story about a cat."),
        output_stats=True,
    )

    # Console output example:
    # ---------- TextMessage (A) ----------
    # Once upon a time...
    # ---------- TextMessage (B) ----------
    # ...Chinese translation...
    # ---------- TextMessage (C) ----------
    # ...English translation...
    # ---------- Summary ----------
    # Number of messages: 4
    # Finish reason: Digraph execution is complete
    # Total prompt tokens: 150
    # Total completion tokens: 200
    # Duration: 3.45 seconds

    print(f"\nFinal stop reason: {result.stop_reason}")
    await model_client.close()


asyncio.run(main())

Console with Conditional Graph

import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import DiGraphBuilder, GraphFlow
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4.1-nano")

    classifier = AssistantAgent(
        "classifier", model_client=model_client,
        system_message="Detect if input is Chinese. Say 'yes' or 'no' only.",
    )
    cn_to_en = AssistantAgent("cn_to_en", model_client=model_client,
                               system_message="Translate Chinese to English.")
    en_to_cn = AssistantAgent("en_to_cn", model_client=model_client,
                               system_message="Translate English to Chinese.")

    builder = DiGraphBuilder()
    builder.add_node(classifier).add_node(cn_to_en).add_node(en_to_cn)
    builder.add_edge(classifier, cn_to_en,
                     condition=lambda msg: "yes" in msg.to_model_text().lower())
    builder.add_edge(classifier, en_to_cn,
                     condition=lambda msg: "yes" not in msg.to_model_text().lower())
    graph = builder.build()

    team = GraphFlow(
        participants=builder.get_participants(),
        graph=graph,
        termination_condition=MaxMessageTermination(5),
    )

    # Console renders each branch taken
    result = await Console(team.run_stream(task="AutoGen is a framework for AI agents."))
    print(f"Stop reason: {result.stop_reason}")
    await model_client.close()


asyncio.run(main())
