
Workflow: anthropics/anthropic-sdk-python Tool Use Integration

From Leeroopedia
Knowledge Sources
Domains LLMs, Tool_Use, Function_Calling, Agentic_AI
Last Updated 2026-02-15 12:00 GMT

Overview

End-to-end process for enabling Claude to call external functions (tools) using both manual tool result handling and the automated tool runner loop in the Anthropic Python SDK.

Description

This workflow covers two approaches to tool use with Claude. The manual approach involves defining tool schemas, sending them with the message request, detecting tool_use content blocks in the response, executing the corresponding functions locally, and sending tool results back to the API in a follow-up message. The automated approach uses the @beta_tool decorator to convert Python functions into tool definitions automatically, and the BetaToolRunner to handle the entire call-execute-respond loop without manual intervention. Both approaches support streaming, async execution, and context compaction for long-running tool conversations.

Usage

Execute this workflow when Claude needs to interact with external systems (databases, APIs, file systems), perform calculations, retrieve real-time data, or take actions in the real world. Use the manual approach when you need full control over tool execution, and the automated runner when you want simplified orchestration of multi-step tool interactions.

Execution Steps

Step 1: Tool Definition

Define the tools available to Claude. In the manual approach, create ToolParam dictionaries with name, description, and JSON Schema input_schema. In the automated approach, decorate Python functions with @beta_tool, which automatically generates the tool schema from the function signature and docstring.

Key considerations:

  • Manual tools use JSON Schema to describe input parameters
  • The @beta_tool decorator extracts parameter types, descriptions (from docstring Args section), and return types automatically
  • Tool descriptions should clearly explain what the tool does and when to use it
  • The decorator creates BetaFunctionTool objects that can both describe themselves (to_dict()) and execute (call())
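A minimal sketch of a manual tool definition, as a plain dict in the ToolParam shape. The `get_weather` tool and its schema are illustrative examples, not part of the SDK:

```python
# Manual tool definition: a plain dict matching the SDK's ToolParam shape.
# get_weather is a hypothetical tool used for illustration.
get_weather_tool = {
    "name": "get_weather",
    "description": (
        "Get the current weather for a given city. "
        "Use when the user asks about weather conditions."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "Temperature unit",
            },
        },
        "required": ["city"],
    },
}

# Automated alternative (sketch, assuming a recent SDK exposing beta_tool):
# from anthropic import beta_tool
#
# @beta_tool
# def get_weather(city: str, unit: str = "celsius") -> str:
#     """Get the current weather for a given city.
#
#     Args:
#         city: City name.
#         unit: Temperature unit.
#     """
#     ...
```

In the decorated form, the schema above would be derived from the type hints and the docstring's Args section instead of being written by hand.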

Step 2: Message Request with Tools

Send the message request to the API with the tools parameter populated. Claude analyzes the conversation and decides whether to call any tools. The tool_choice parameter can control this behavior: "auto" (default), "any" (force tool use), "none" (disable), or a specific tool name.

Key considerations:

  • Pass tool definitions in the tools parameter of create() or stream()
  • The model evaluates the user's request and available tools to decide which (if any) to call
  • Multiple tools can be defined simultaneously; Claude picks the appropriate one
  • Server-side tools (web_search, code_execution) are defined by type string rather than custom schema
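A sketch of a tool-enabled request. Building the keyword arguments as a dict first keeps the example runnable offline; the model name and tool schema are assumptions for illustration:

```python
# Illustrative tool schema (see Step 1).
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a given city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

request = {
    "model": "claude-sonnet-4-20250514",  # assumed model name
    "max_tokens": 1024,
    "tools": [get_weather_tool],
    # tool_choice variants: {"type": "auto"}, {"type": "any"},
    # {"type": "none"}, or {"type": "tool", "name": "get_weather"}
    "tool_choice": {"type": "auto"},
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
}

# With a configured client, the same kwargs drive either entry point:
# response = client.messages.create(**request)
# with client.messages.stream(**request) as stream: ...
```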

Step 3: Tool Call Detection

Examine the API response for tool_use content blocks. When the model decides to call a tool, the response stop_reason is "tool_use" and the content includes ToolUseBlock objects containing the tool name, a unique tool_use_id, and the input parameters.

Key considerations:

  • In the manual approach, check stop_reason == "tool_use" or iterate content blocks for type == "tool_use"
  • Each tool use block has a unique id that must be referenced in the result
  • The input field contains the structured arguments matching the tool's input_schema
  • Multiple tool calls can appear in a single response
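The detection step can be sketched against a mocked response object shaped like the SDK's Message (the `SimpleNamespace` stand-in and its field values are illustrative):

```python
from types import SimpleNamespace

def extract_tool_calls(response):
    """Return all tool_use blocks from a message response."""
    if response.stop_reason != "tool_use":
        return []
    return [block for block in response.content if block.type == "tool_use"]

# Mocked response shaped like the SDK's Message object (illustrative values).
response = SimpleNamespace(
    stop_reason="tool_use",
    content=[
        SimpleNamespace(type="text", text="Let me check the weather."),
        SimpleNamespace(
            type="tool_use",
            id="toolu_01",                 # unique id to echo back in the result
            name="get_weather",
            input={"city": "Paris"},       # arguments matching input_schema
        ),
    ],
)

calls = extract_tool_calls(response)
```

Iterating content blocks (rather than only checking `stop_reason`) also handles responses that mix text blocks with several tool calls.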

Step 4: Tool Execution

Execute the corresponding function with the provided input parameters. In the manual approach, this is explicit function dispatch based on the tool name. In the automated approach, the BetaToolRunner handles this automatically by matching tool names to decorated functions and calling them.

Key considerations:

  • Manual: Map tool name to function, extract inputs, call function, capture result
  • Automated: The runner's generate_tool_call_response() method handles all execution
  • Tool functions can return strings or structured content blocks (BetaFunctionToolResultType)
  • Errors during execution should be caught and returned as error results
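A sketch of manual dispatch with error capture; `get_weather` and the registry are hypothetical stand-ins for application code:

```python
def get_weather(city: str) -> str:
    # Hypothetical local implementation.
    return f"18°C and sunny in {city}"

# Registry mapping tool names (as defined in the schemas) to callables.
TOOL_FUNCTIONS = {"get_weather": get_weather}

def execute_tool(name, tool_input):
    """Dispatch a tool call by name.

    Returns (content, is_error); exceptions are caught and reported as
    error results rather than crashing the conversation loop.
    """
    try:
        return TOOL_FUNCTIONS[name](**tool_input), False
    except Exception as exc:
        return f"Tool execution failed: {exc}", True

result, is_error = execute_tool("get_weather", {"city": "Paris"})
```

Returning an `(content, is_error)` pair keeps failures inside the protocol: the error text goes back to the model as a tool result flagged with `is_error`, letting it recover or retry.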

Step 5: Tool Result Submission

Send the tool execution results back to Claude in a follow-up message. The tool result must reference the original tool_use_id. Claude then incorporates the result into its reasoning and generates a final response (or calls additional tools).

Key considerations:

  • Tool results use the ToolResultBlockParam format with tool_use_id, content, and optional is_error flag
  • The conversation history must include: original user message, assistant response with tool_use, and user message with tool_result
  • In the automated runner, this loop continues automatically until no more tool calls are needed or max_iterations is reached
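The result-submission message can be sketched as a small builder (the helper name and the example id are illustrative):

```python
def build_tool_result_message(tool_use_id, content, is_error=False):
    """Build the user-role follow-up message carrying one tool result."""
    block = {
        "type": "tool_result",
        "tool_use_id": tool_use_id,  # must match the id from the tool_use block
        "content": content,
    }
    if is_error:
        block["is_error"] = True
    return {"role": "user", "content": [block]}

msg = build_tool_result_message("toolu_01", "18°C and sunny in Paris")

# The history sent back to the API must contain, in order:
#   1. the original user message
#   2. the assistant message containing the tool_use block(s)
#   3. this user message containing the matching tool_result block(s)
```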

Step 6: Iteration and Compaction

For complex tasks requiring multiple tool calls, the loop repeats (steps 2-5). The automated BetaToolRunner supports compaction_control to automatically summarize conversation history when token usage exceeds a threshold, preventing context window overflow in long-running tool sessions.

Key considerations:

  • The runner iterates: API call, tool execution, result submission, repeat
  • Compaction activates when input tokens exceed the configured threshold (default 100,000)
  • Compaction replaces conversation history with a summary while preserving context
  • max_iterations parameter prevents infinite loops
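The manual equivalent of the runner's loop can be sketched with a stubbed model call so it runs offline; `call_model`, `get_time`, and the canned replies are illustrative stand-ins for the API and real tools:

```python
def call_model(messages):
    """Stub for client.messages.create: first turn requests a tool,
    the turn after a tool_result arrives produces the final answer."""
    has_result = any(
        isinstance(block, dict) and block.get("type") == "tool_result"
        for message in messages for block in message["content"]
    )
    if not has_result:
        return {"stop_reason": "tool_use",
                "content": [{"type": "tool_use", "id": "toolu_01",
                             "name": "get_time", "input": {}}]}
    return {"stop_reason": "end_turn",
            "content": [{"type": "text", "text": "It is noon."}]}

def get_time():
    return "12:00"

messages = [{"role": "user",
             "content": [{"type": "text", "text": "What time is it?"}]}]
max_iterations = 5  # guards against infinite tool loops

for _ in range(max_iterations):
    response = call_model(messages)
    if response["stop_reason"] != "tool_use":
        break
    # Echo the assistant turn, then answer every tool call it contains.
    messages.append({"role": "assistant", "content": response["content"]})
    results = [{"type": "tool_result", "tool_use_id": block["id"],
                "content": get_time()}
               for block in response["content"] if block["type"] == "tool_use"]
    messages.append({"role": "user", "content": results})
```

The BetaToolRunner automates exactly this shape of loop; compaction_control (not shown here) would additionally summarize `messages` once input tokens pass the configured threshold.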

Execution Diagram

GitHub URL

Workflow Repository