
Principle: Anthropic Python SDK Tool Execution

From Leeroopedia
Knowledge Sources
Domains Tool_Use, LLM, Function_Calling
Last Updated 2026-02-15 00:00 GMT

Overview

Tool Execution is the step where a tool function is actually invoked with the arguments the model requested. After detecting a tool_use block in the model's response, the application must validate the model-provided input, call the corresponding function, and capture the result (or error) for subsequent submission back to the model. The Anthropic Python SDK provides structured execution through BetaFunctionTool.call(), which handles input validation and dispatch automatically.
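The bridge from a tool_use block to a Python call can be pictured with a minimal stdlib sketch. This is an illustrative analogue, not the SDK's actual internals: the registry and helper names below are hypothetical stand-ins for what BetaFunctionTool.call() does for you.

```python
# Illustrative sketch of the execution step, stdlib only.
# TOOLS and execute_tool_use are hypothetical names, not SDK API.

def get_weather(city: str) -> str:
    """Toy tool function standing in for a real side-effecting tool."""
    return f"Sunny in {city}"

# Name -> callable registry, analogous to a set of registered tools
TOOLS = {"get_weather": get_weather}

def execute_tool_use(block: dict) -> str:
    """Look up the requested tool and unpack the model's input as kwargs."""
    func = TOOLS[block["name"]]
    return func(**block["input"])

result = execute_tool_use({"name": "get_weather", "input": {"city": "Paris"}})
# -> "Sunny in Paris"
```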

Theory: Invoking Registered Tool Functions

The execution step bridges the LLM's symbolic tool call (a JSON object with a name and input dict) and the actual side-effecting Python function. This bridge must handle several concerns:

  1. Argument unpacking: The model provides arguments as a flat Dict[str, object]; these must be unpacked into the function's keyword arguments
  2. Type coercion: JSON types (string, number, boolean, array, object) must map correctly to Python types (including enums, nested models, etc.)
  3. Validation: Arguments must conform to the function's type annotations before execution begins
  4. Error containment: If the function raises an exception, the error must be captured and reported back to the model rather than crashing the application
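The four concerns above can be sketched together with the standard library alone. This is a hedged approximation of what the SDK delegates to pydantic; `call_checked` is a hypothetical helper, and real type coercion is richer than the isinstance check shown here.

```python
import inspect

def call_checked(func, input_dict: dict):
    """Illustrative cover of concerns 1-4: unpack, (crude) type check,
    validate, and contain runtime errors. The SDK uses pydantic instead."""
    sig = inspect.signature(func)
    try:
        bound = sig.bind(**input_dict)  # catches missing/extra parameters
    except TypeError as exc:
        raise ValueError(f"Invalid arguments for function {func.__name__}") from exc
    for name, value in bound.arguments.items():
        ann = sig.parameters[name].annotation
        if ann is not inspect.Parameter.empty and not isinstance(value, ann):
            raise ValueError(f"Invalid arguments for function {func.__name__}")
    try:
        return func(*bound.args, **bound.kwargs)  # error containment point
    except Exception as exc:
        return f"tool error: {exc!r}"

def add(a: int, b: int) -> int:
    return a + b

def boom() -> str:
    raise RuntimeError("db down")
```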

Input Validation Using Pydantic

The SDK wraps each decorated tool function with pydantic.validate_call at registration time. When .call() is invoked, the input dict is unpacked as keyword arguments to this validated wrapper:

# Internal flow inside BetaFunctionTool.call():
# 1. Verify input is a dict
# 2. Unpack: self._func_with_validate(**input)
# 3. Pydantic validates and coerces types
# 4. If validation fails, raise ValueError
# 5. If validation passes, execute the original function

This means the function body never executes with invalid input. Common validation failures caught at this stage include:

  • Missing required parameters
  • Wrong types (e.g., string where int was expected)
  • Values outside enum constraints
  • Unexpected extra parameters

If validation fails, the SDK raises a ValueError with the message "Invalid arguments for function {name}", chaining the underlying pydantic.ValidationError for debugging.
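The chaining behavior can be sketched without pydantic installed. In this stdlib-only approximation, TypeError stands in for pydantic.ValidationError; `validated_call` is a hypothetical helper, not the SDK's method.

```python
def validated_call(func, tool_name: str, input_dict):
    """Sketch of the SDK's failure mode: re-raise as ValueError with the
    underlying validation error chained as __cause__ for debugging."""
    if not isinstance(input_dict, dict):
        raise ValueError(f"Invalid arguments for function {tool_name}")
    try:
        return func(**input_dict)
    except TypeError as exc:  # stand-in for pydantic.ValidationError
        raise ValueError(f"Invalid arguments for function {tool_name}") from exc

def greet(name: str) -> str:
    return f"hello {name}"
```

Inspecting `__cause__` on the caught ValueError recovers the underlying validation failure, mirroring how the SDK preserves the pydantic error.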

Sync vs. Async Execution

The SDK provides two parallel class hierarchies for tool execution:

Class                 | Decorator        | Execution Model | Call Signature
BetaFunctionTool      | @beta_tool       | Synchronous     | .call(input) -> BetaFunctionToolResultType
BetaAsyncFunctionTool | @beta_async_tool | Asynchronous    | await .call(input) -> BetaFunctionToolResultType

Each class enforces its execution model: calling a sync tool that wraps a coroutine raises RuntimeError("Cannot call a coroutine function synchronously"), and vice versa.
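The sync-side guard can be reproduced in a few lines. This is a sketch of the enforcement pattern, not the SDK's code; `call_sync_tool` is a hypothetical name, though the error message mirrors the one quoted above.

```python
import inspect

def call_sync_tool(func, input_dict: dict):
    """Refuse to run a coroutine function from a synchronous .call()-style
    entry point, mirroring the SDK's RuntimeError."""
    if inspect.iscoroutinefunction(func):
        raise RuntimeError("Cannot call a coroutine function synchronously")
    return func(**input_dict)

def sync_lookup(key: str) -> str:
    return f"value for {key}"

async def async_lookup(key: str) -> str:
    return f"value for {key}"
```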

The result type is defined as:

BetaFunctionToolResultType = Union[str, Iterable[BetaContent]]

This allows tools to return either a simple string or a structured content array (with text blocks, image blocks, etc.) for rich responses.
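A tool consumer must handle both shapes of the union. In this sketch, plain dicts stand in for BetaContent blocks, and `render_result` is a hypothetical helper showing one way to collapse either shape into the text submitted back to the model.

```python
from typing import Iterable, Union

# Hypothetical: dicts stand in for BetaContent blocks in this sketch.
ToolResult = Union[str, Iterable[dict]]

def render_result(result: ToolResult) -> str:
    """Collapse either allowed return shape into a single text payload."""
    if isinstance(result, str):
        return result
    # Keep only text blocks; image or other block types need separate handling
    return "\n".join(b["text"] for b in result if b.get("type") == "text")
```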

Error Handling Strategy

Tool execution errors are handled at two levels:

Level 1 -- Validation errors: Raised as ValueError before the function body runs. These indicate the model produced invalid arguments.

Level 2 -- Runtime errors: Any exception raised during function execution. In the manual loop, these should be caught and converted to error result blocks:

try:
    # Execute the registered function with the model-provided arguments
    result = tool.call(block.input)
    tool_results.append({
        "type": "tool_result",
        "tool_use_id": block.id,
        "content": str(result),
    })
except Exception as e:
    # Report the failure back to the model instead of crashing the loop
    tool_results.append({
        "type": "tool_result",
        "tool_use_id": block.id,
        "content": repr(e),
        "is_error": True,
    })

The automated BetaToolRunner handles this pattern internally, catching exceptions, logging them, and submitting error results with is_error: True.
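The catch-and-report pattern can be exercised end to end with a stdlib sketch. `run_tool_block` and the registry below are hypothetical stand-ins for what BetaToolRunner does with BetaFunctionTool.call() internally.

```python
def run_tool_block(registry: dict, block: dict) -> dict:
    """Sketch of the runner's containment: success yields a tool_result,
    any exception yields an is_error result instead of propagating."""
    try:
        result = registry[block["name"]](**block["input"])
        return {"type": "tool_result", "tool_use_id": block["id"],
                "content": str(result)}
    except Exception as exc:
        return {"type": "tool_result", "tool_use_id": block["id"],
                "content": repr(exc), "is_error": True}

def divide(a: float, b: float) -> float:
    return a / b

registry = {"divide": divide}
```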

Design Considerations

  • Idempotency: Tools should ideally be idempotent or at least safe to retry, since the model may request the same call again if it receives an error result
  • Timeout management: Long-running tools should implement their own timeouts to avoid blocking the conversation loop indefinitely
  • Side effects: Tool execution is where real-world actions happen (API calls, database writes, file operations); careful access control is essential
  • Result formatting: The model interprets tool results as text; returning well-formatted, concise results helps the model generate better final responses
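One concrete timeout strategy, sketched under the assumption that tools run in the conversation loop's process: hand the call to a worker thread and bound the wait. `call_with_timeout` is a hypothetical helper, and note its limitation, since the worker thread keeps running after the timeout, truly cancellable tools need cooperative checks or subprocess isolation.

```python
import concurrent.futures
import time

def call_with_timeout(func, input_dict: dict, timeout_s: float):
    """Run a tool in a worker thread; raise TimeoutError past the budget.
    The thread itself is not killed, only abandoned by the caller."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(func, **input_dict)
        return future.result(timeout=timeout_s)

def slow_tool(delay: float) -> str:
    time.sleep(delay)
    return "done"
```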

Related Pages

Implemented By
