
Principle:Microsoft Autogen Graph Execution

From Leeroopedia
Knowledge Sources
Domains Multi-Agent Systems, Graph Execution, Concurrent Processing, Workflow Runtime, Streaming
Last Updated 2026-02-11 00:00 GMT

Overview

Graph execution is the runtime process of traversing a directed graph to determine agent invocation order, evaluating edge conditions against agent outputs, and streaming execution events as they occur.

Description

Once a graph-directed team is configured with agents and a validated directed graph, execution proceeds by traversing the graph at runtime. The execution engine maintains a ready queue of nodes that are eligible to run and a remaining count map that tracks how many incoming edges each node still needs before it becomes ready.

Graph execution supports four distinct patterns, each with different runtime behavior:

Sequential Execution

In the simplest case, nodes form a linear chain (A -> B -> C). Each node executes after its predecessor completes, and the ready queue contains at most one node at a time. This is equivalent to a pipeline.

Parallel Fan-Out

When a node has multiple outgoing unconditional edges (A -> B, A -> C), all target nodes become ready simultaneously after the source completes. The execution engine runs these nodes concurrently in the same turn. Both B and C execute in parallel and their outputs appear in the stream as they complete.
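The fan-out behavior can be sketched with plain asyncio (agent names, delays, and outputs are illustrative stand-ins, not the library's API):

```python
import asyncio

async def run_agent(name: str, delay: float) -> str:
    # Stand-in for an agent invocation; a real agent would call an LLM.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def fan_out() -> list[str]:
    # After A completes, B and C become ready in the same turn and run
    # concurrently; outputs stream back as each one finishes.
    a = await run_agent("A", 0.01)
    b, c = await asyncio.gather(run_agent("B", 0.02), run_agent("C", 0.01))
    return [a, b, c]

results = asyncio.run(fan_out())
```

`asyncio.gather` preserves argument order in its result list even though C finishes before B, which mirrors how concurrent node outputs can arrive out of order in the event stream.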

Conditional Branching

When a node has conditional outgoing edges, the execution engine evaluates each edge's condition against the source node's output message. Only edges whose conditions are satisfied make their targets ready. This enables routing: for example, sending the conversation to a translator agent if the input is in a foreign language, or to a different handler otherwise.

Two types of conditions are supported:

  • String conditions: The edge is traversed if the condition string appears as a substring of the last message's text content.
  • Callable conditions: The edge is traversed if the callable returns True when given the last message object.
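A minimal sketch of how the two condition types might be evaluated, assuming a message object with a text `content` field (the `Message` class and function names here are illustrative, not the library's actual types):

```python
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Message:
    source: str
    content: str

Condition = Union[str, Callable[[Message], bool]]

def edge_is_satisfied(condition: Condition, last_message: Message) -> bool:
    """Evaluate an edge condition against the last message.

    A string condition passes when it appears as a substring of the
    message text; a callable condition passes when it returns True
    for the message object.
    """
    if callable(condition):
        return condition(last_message)
    return condition in last_message.content

# String condition: route to a translator when the classifier mentions French.
msg = Message(source="classifier", content="detected language: French")
```
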

A critical constraint: all of a node's outgoing edges must be uniformly conditional or uniformly unconditional. Mixing the two kinds on a single node is prohibited.
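The uniformity constraint can be expressed as a simple configuration-time check (a sketch; the edge representation as dicts is an assumption for illustration):

```python
def validate_node_edges(node: str, edges: list[dict]) -> None:
    """Reject a node whose outgoing edges mix conditional and
    unconditional forms (sketch of the configuration-time check)."""
    has_cond = [e.get("condition") is not None for e in edges]
    if any(has_cond) and not all(has_cond):
        raise ValueError(f"node {node}: outgoing edges must be all "
                         "conditional or all unconditional")

# A node whose edges are uniformly conditional passes validation.
validate_node_edges("A", [{"target": "B", "condition": "APPROVE"},
                          {"target": "C", "condition": "REVISE"}])
```
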

Cyclic Loops

When edges form cycles (A -> B -> A), the execution engine supports iterative patterns such as review loops. Each time a node in the cycle completes, its outgoing edges are evaluated, and if a loop-back edge's condition is satisfied, the target node is re-queued.

For nodes reachable via both the initial entry path and a loop-back path, activation groups distinguish these two dependency channels. A loop-back edge uses a separate activation group so that re-entry does not require satisfaction of the original entry edge.
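The per-group bookkeeping can be sketched as follows, assuming "all" activation within each group (the group names and data layout are illustrative):

```python
def on_edge_satisfied(remaining: dict, ready: list, target: str, group: str) -> None:
    """Decrement the target's count in the satisfied edge's activation
    group, and enqueue the target when that group alone reaches zero
    (sketch; 'all' activation within the group is assumed)."""
    remaining[(target, group)] -= 1
    if remaining[(target, group)] == 0:
        ready.append(target)

# Node B is reachable via the entry edge (group "default") and via a
# loop-back edge (group "loop"); each group's count is tracked separately,
# so a loop-back activation alone is enough to re-queue B.
remaining = {("B", "default"): 1, ("B", "loop"): 1}
ready = []
on_edge_satisfied(remaining, ready, "B", "loop")
```
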

Cycle safety is enforced at configuration time: every cycle must have at least one conditional edge, and cyclic graphs must have either a termination condition or a maximum turn limit.
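The "every cycle must have at least one conditional edge" rule reduces to cycle detection on the subgraph of unconditional edges: a cycle with no conditional edge exists exactly when that subgraph is cyclic. A sketch of the check (graph layout is illustrative):

```python
def has_unconditional_cycle(edges: dict) -> bool:
    """Return True if some cycle consists only of unconditional edges.

    edges maps node -> [(target, condition_or_None), ...].  Such a cycle
    could loop forever, so configuration-time validation rejects it.
    """
    # Keep only unconditional edges; an unsafe cycle survives the filter.
    adj = {u: [v for v, cond in outs if cond is None] for u, outs in edges.items()}
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on DFS stack / finished
    color = {u: WHITE for u in adj}

    def dfs(u: str) -> bool:
        color[u] = GRAY
        for v in adj.get(u, []):
            if color.get(v, WHITE) == GRAY:   # back edge => cycle
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and dfs(u) for u in list(adj))
```
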

Execution Termination

Graph execution terminates under three conditions:

  1. Natural completion: The ready queue is empty because all leaf nodes have been reached and there are no more edges to traverse.
  2. Termination condition: A user-supplied termination condition (e.g., maximum message count, content match) is met.
  3. Maximum turns: The configured turn limit is reached.

On natural completion, the execution engine sends a stop message and resets internal state, making the team ready for a subsequent run.

Usage

Use graph execution (via streaming) when:

  • You need real-time visibility into which agents are executing and what they produce as the workflow progresses.
  • You want to render agent outputs to a user interface as they become available, rather than waiting for the entire workflow to finish.
  • You need to support cancellation of long-running workflows via cancellation tokens.
  • You are building interactive systems where users observe agent collaboration in real time.
  • You need the full sequence of events for logging, debugging, or audit purposes.

Theoretical Basis

Ready Queue and Activation Tracking

The execution engine uses a data structure analogous to Kahn's algorithm for topological sorting, extended to support conditional edges and cycles:

  1. Initialize the ready queue with all start nodes (nodes with no incoming edges, or the designated entry point).
  2. Initialize a remaining count for each node, tracking how many incoming edge activations are still needed.
  3. In each turn, drain all nodes from the ready queue and execute them concurrently.
  4. After each node completes, evaluate its outgoing edges against the output message.
  5. For each satisfied edge, decrement the target's remaining count in the appropriate activation group.
  6. If the remaining count reaches zero (for "all" mode) or any edge is satisfied (for "any" mode), add the target to the ready queue.
  7. If the graph has cycles, reset the remaining count for a node's triggered activation groups when that node is dequeued, allowing it to be re-entered later.
  8. Repeat until the ready queue is empty or a termination condition is met.
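Steps 1 through 6 can be condensed into a runnable sketch for acyclic graphs with "all" activation and substring conditions (agent behavior is stubbed out; the names and data structures are illustrative, not the library's API):

```python
from collections import deque

def execute(graph: dict, agents: dict, max_turns: int = 10) -> list:
    """Kahn-style traversal extended with conditional edges.

    graph:  {node: [(target, condition_or_None), ...]}
    agents: {node: zero-arg callable returning the node's output string}
    Returns the ordered list of (node, output) pairs.
    """
    # Step 2: remaining count = number of incoming edges per node.
    incoming = {n: 0 for n in graph}
    for outs in graph.values():
        for target, _ in outs:
            incoming[target] = incoming.get(target, 0) + 1
    remaining = dict(incoming)
    # Step 1: start nodes are those with no incoming edges.
    ready = deque(n for n, c in incoming.items() if c == 0)
    history = []
    for _ in range(max_turns):
        if not ready:
            break  # Step 8: natural completion
        # Step 3: drain the ready queue for this turn.
        speakers, ready = list(ready), deque()
        for node in speakers:
            output = agents[node]()
            history.append((node, output))
            # Steps 4-6: evaluate edges, decrement counts, enqueue.
            for target, cond in graph.get(node, []):
                if cond is None or cond in output:
                    remaining[target] -= 1
                    if remaining[target] == 0:
                        ready.append(target)
    return history

# Diamond graph: A fans out to B and C, which both feed D.
graph = {"A": [("B", None), ("C", None)],
         "B": [("D", None)], "C": [("D", None)], "D": []}
agents = {n: (lambda n=n: f"{n} finished") for n in graph}
history = execute(graph, agents)
```

D runs only after both B and C complete, because its remaining count starts at two and reaches zero only once both incoming edges have fired.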

Streaming Execution Model

Graph execution uses an asynchronous generator pattern. The stream yields three types of items:

  • BaseAgentEvent: Internal events from agent execution (e.g., tool calls, intermediate results).
  • BaseChatMessage: Agent output messages that form the conversation history.
  • TaskResult: The final result, always the last item in the stream, containing the full message history and stop reason.
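The stream contract above can be sketched with a plain async generator (the dataclasses here are simplified stand-ins for the library's message types, and `BaseAgentEvent` items are omitted):

```python
import asyncio
from dataclasses import dataclass

@dataclass
class BaseChatMessage:
    source: str
    content: str

@dataclass
class TaskResult:
    messages: list
    stop_reason: str

async def run_stream(pipeline):
    """Yield each agent's message as it is produced, then a TaskResult
    as the guaranteed last item (sketch of the stream contract)."""
    messages = []
    for name, text in pipeline:
        await asyncio.sleep(0)  # simulate asynchronous agent work
        msg = BaseChatMessage(source=name, content=text)
        messages.append(msg)
        yield msg
    yield TaskResult(messages=messages, stop_reason="All nodes completed")

async def collect():
    items = []
    async for item in run_stream([("writer", "draft"), ("reviewer", "ok")]):
        items.append(item)
    return items

items = asyncio.run(collect())
```

A consumer can render each `BaseChatMessage` as it arrives and treat the trailing `TaskResult` as the signal that the run is over.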

Pseudocode: Graph Execution with Streaming

async function run_stream(graph, participants, task, termination_condition, max_turns):
    ready_queue = graph.start_nodes()
    remaining = compute_incoming_edge_counts(graph)
    turn = 0

    publish task message to all participants
    yield task message

    while ready_queue is not empty and turn < max_turns:
        turn += 1
        speakers = drain(ready_queue)

        execute all speakers concurrently:
            for each speaker output message:
                yield message events

                # Evaluate outgoing edges
                for each edge from speaker:
                    if edge.condition is satisfied by output:
                        group = edge.activation_group
                        if activation[target][group] == "all":
                            remaining[target][group] -= 1
                            if remaining[target][group] == 0:
                                ready_queue.add(target)
                        else:  # "any"
                            if not already_enqueued[target][group]:
                                ready_queue.add(target)

        # Reset activation state for dequeued nodes
        for each speaker in speakers:
            reset triggered activation groups for speaker

        # Check termination
        if termination_condition is met:
            break

    yield TaskResult(messages, stop_reason)
