
Principle:Microsoft Autogen Tool Augmented Agent

From Leeroopedia
Knowledge Sources
Domains Tool Use, LLM Agents, Function Calling, Agent Configuration
Last Updated 2026-02-11 00:00 GMT

Overview

A tool-augmented agent is an LLM-powered agent that has been configured with a set of callable tools, enabling it to perform actions beyond text generation by invoking external functions, APIs, or sub-agents through structured tool calling.

Description

Standard LLM agents can only produce text. A tool-augmented agent extends this by equipping the agent with tools that the LLM can invoke during a conversation. When the LLM determines that a tool is needed to answer a question or complete a task, it emits a structured tool call request instead of (or in addition to) text. The agent framework intercepts this request, executes the tool, and feeds the result back to the LLM for further reasoning.
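This intercept–execute–feed-back cycle can be sketched framework-agnostically. The names below (`ToolCall`, `run_turn`, the stub model) are illustrative only, not AutoGen's actual classes; a minimal single-round loop, assuming the model signals tool use by returning a structured request instead of text:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative type -- not a real framework class.
@dataclass
class ToolCall:
    name: str
    args: dict

def run_turn(model, tools: dict[str, Callable], user_message: str) -> str:
    """One turn: the model either answers in text or emits a structured
    tool call, which the framework executes and feeds back for reasoning."""
    history = [("user", user_message)]
    response = model(history)                      # model decides: text or tool call
    if isinstance(response, ToolCall):
        result = tools[response.name](**response.args)  # framework executes the tool
        history.append(("tool", str(result)))
        response = model(history)                  # result fed back to the LLM
    return response

# A stub model standing in for an LLM: it requests a tool call first,
# then summarizes the tool result on the second pass.
def stub_model(history):
    last_role, last_content = history[-1]
    if last_role == "user":
        return ToolCall(name="add", args={"a": 2, "b": 3})
    return f"The sum is {last_content}."

print(run_turn(stub_model, {"add": lambda a, b: a + b}, "What is 2 + 3?"))
# -> The sum is 5.
```

The key structural point is that the model never executes anything itself: it only emits the request, and the surrounding loop owns execution and re-prompting.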

Configuring a tool-augmented agent involves several decisions:

  • Tool source: Tools can be provided directly as a list of tool objects or callables, or they can be provided via one or more workbenches that manage tool lifecycle and discovery. These two approaches are mutually exclusive -- an agent uses either a direct tool list or workbenches, but not both.
  • Reflection on tool use: After a tool returns its result, the agent can optionally send the result back to the LLM for a reflection step. In this step, the LLM reviews the tool output and produces a natural-language summary or follow-up. Without reflection, the agent returns the raw tool output as the response. Reflection is useful when the tool result needs interpretation or when the agent should decide whether to call additional tools.
  • Tool iteration limits: The agent can be configured with a maximum number of tool call iterations. If set to 1 (the default), the agent performs a single round of tool calls. If set higher, the agent enters a loop where after each round of tool calls, it sends the results back to the LLM, which may request additional tool calls. The loop terminates when the LLM produces a text response or the iteration limit is reached.
  • Tool call summary format: When the agent does not reflect on tool use, it summarizes tool results using a configurable format string. This controls how tool outputs are presented as the agent's final response.
  • Callable auto-wrapping: Plain Python callables (functions) can be passed directly as tools. The framework automatically wraps them using a function tool definition, extracting the docstring as the tool description and the type annotations as the parameter schema.
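The auto-wrapping step can be approximated with Python's standard `inspect` module. The `FunctionTool` dataclass and flat parameter schema below are a simplified stand-in for whatever the framework actually builds, not its real wrapper:

```python
import inspect
from dataclasses import dataclass, field

@dataclass
class FunctionTool:
    """Simplified stand-in for a framework's function-tool wrapper."""
    func: callable
    name: str = ""
    description: str = ""
    parameters: dict = field(default_factory=dict)

def wrap_callable(func) -> FunctionTool:
    """Derive tool metadata from a plain callable:
    docstring -> description, type annotations -> parameter schema."""
    sig = inspect.signature(func)
    params = {
        name: getattr(p.annotation, "__name__", str(p.annotation))
        for name, p in sig.parameters.items()
        if p.annotation is not inspect.Parameter.empty
    }
    return FunctionTool(
        func=func,
        name=func.__name__,
        description=inspect.getdoc(func) or "",
        parameters=params,
    )

def get_weather(city: str, unit: str = "celsius") -> str:
    """Look up the current weather for a city."""
    return f"sunny in {city}"

tool = wrap_callable(get_weather)
print(tool.name)         # get_weather
print(tool.description)  # Look up the current weather for a city.
print(tool.parameters)   # {'city': 'str', 'unit': 'str'}
```

A real framework would emit a JSON-Schema parameter object for the model's function-calling API, but the extraction sources are the same: the docstring and the annotations.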

Usage

Configure a tool-augmented agent when:

  • The agent needs to perform actions that an LLM cannot do alone (web searches, database queries, calculations, file operations, API calls).
  • You want the LLM to decide which tools to use and when, based on the conversation context.
  • You need multi-step tool use where the output of one tool informs the next tool call.
  • You want the agent to summarize or interpret tool results before presenting them to the user.
  • You are building agentic workflows where an agent acts as an orchestrator that delegates subtasks to specialized tools.

Theoretical Basis

Tool-augmented agents implement the ReAct (Reasoning + Acting) paradigm, where the LLM alternates between reasoning about the task and acting by invoking tools. The configuration determines the agent's tool-use behavior:

FUNCTION configure_tool_augmented_agent(tools_or_workbench, reflect, max_iterations, summary_format):
    1. Validate tool source:
       - If tools list provided: wrap callables as FunctionTools, store in static workbench
       - If workbench provided: ensure no direct tools, store workbench reference
       - Both provided: raise error (mutually exclusive)
    2. Validate model capability:
       - Ensure the model client supports function calling
       - If not, raise error (tools require function calling)
    3. Validate tool uniqueness:
       - Ensure all tool names are unique across tools and handoffs
    4. Configure reflection:
       - If reflect is True: after tool execution, send results back to LLM for interpretation
       - If reflect is False: use summary_format to format tool results as final response
       - If reflect is None and structured output is required: default to True
    5. Configure iteration limit:
       - Store max_iterations (must be >= 1)
       - This controls the maximum rounds of tool call -> result -> next LLM call
    6. Store summary format string for non-reflection mode
    7. Return configured agent
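The validation steps above can be rendered as a short Python sketch. `AgentConfig` and `configure_agent` are illustrative names, not the framework's; step 2 (model capability) is omitted because there is no model client here:

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    """Illustrative container for validated settings -- not a real framework class."""
    tools: list
    reflect_on_tool_use: bool
    max_tool_iterations: int
    tool_call_summary_format: str

def configure_agent(tools=None, workbench=None, *,
                    reflect=None, structured_output=False,
                    max_iterations=1, summary_format="{result}"):
    # Step 1: tool source is mutually exclusive.
    if tools and workbench:
        raise ValueError("tools and workbench are mutually exclusive")
    resolved = list(tools or (workbench or []))
    # Step 3: tool names must be unique.
    names = [getattr(t, "__name__", str(t)) for t in resolved]
    if len(names) != len(set(names)):
        raise ValueError("duplicate tool names")
    # Step 4: reflection defaults to True only when structured output is required.
    if reflect is None:
        reflect = structured_output
    # Step 5: iteration limit must allow at least one round.
    if max_iterations < 1:
        raise ValueError("max_iterations must be >= 1")
    # Step 6: keep the summary format for non-reflection mode.
    return AgentConfig(resolved, reflect, max_iterations, summary_format)
```

For example, `configure_agent(tools=[some_fn])` yields the single-round, no-reflection default, while passing both `tools` and `workbench` raises at configuration time rather than producing ambiguous behavior at runtime.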

The key architectural decision is the separation between tool provisioning (where tools come from) and tool execution policy (how many iterations, whether to reflect). This allows the same tools to be used with different execution strategies depending on the use case.

The mutual exclusivity of tools and workbenches enforces a clean ownership model. When tools are provided directly, the agent wraps them in an internal workbench. When workbenches are provided, the agent delegates all tool management to them. This prevents ambiguity about which tools are available.
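This ownership model can be illustrated with a tiny static workbench. `StaticWorkbench` and `resolve_workbenches` are hypothetical names for the internal container and resolution step the text describes, not AutoGen's API:

```python
class StaticWorkbench:
    """Hypothetical fixed-list workbench: owns its tools and answers
    discovery and invocation requests for them."""
    def __init__(self, tools):
        self._tools = {t.__name__: t for t in tools}
    def list_tools(self):
        return sorted(self._tools)
    def call(self, name, **kwargs):
        return self._tools[name](**kwargs)

def resolve_workbenches(tools=None, workbenches=None):
    """Direct tools get wrapped in one internal workbench; provided
    workbenches are delegated to as-is. Never both."""
    if tools and workbenches:
        raise ValueError("provide tools or workbenches, not both")
    if tools:
        return [StaticWorkbench(tools)]
    return list(workbenches or [])

def add(a, b):
    return a + b

wbs = resolve_workbenches(tools=[add])
print(wbs[0].list_tools())           # ['add']
print(wbs[0].call("add", a=1, b=2))  # 3
```

Either way, the agent only ever talks to workbenches, so discovery and execution go through a single interface regardless of where the tools came from.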

Related Pages

Implemented By
