
Implementation:Openai Openai agents python Function Tool Decorator

From Leeroopedia



Overview

The Function Tool Decorator (function_tool) is the primary mechanism in the OpenAI Agents Python SDK for converting a plain Python function into a FunctionTool dataclass that an LLM agent can invoke. It handles JSON schema generation, argument parsing, context injection, error handling, and approval workflows automatically.

Source: src/agents/tool.py (lines 811-989)
Import: from agents import function_tool
Inputs: A Python function (sync or async) with typed parameters
Outputs: FunctionTool dataclass instance
Related Principle: Function Tool Definition

Code Reference

Full Signature

def function_tool(
    func: ToolFunction[...] | None = None,
    *,
    name_override: str | None = None,
    description_override: str | None = None,
    docstring_style: DocstringStyle | None = None,
    use_docstring_info: bool = True,
    failure_error_function: ToolErrorFunction | None | object = _UNSET_FAILURE_ERROR_FUNCTION,
    strict_mode: bool = True,
    is_enabled: bool | Callable[[RunContextWrapper[Any], AgentBase], MaybeAwaitable[bool]] = True,
    needs_approval: bool | Callable[[RunContextWrapper[Any], dict[str, Any], str], Awaitable[bool]] = False,
    tool_input_guardrails: list[ToolInputGuardrail[Any]] | None = None,
    tool_output_guardrails: list[ToolOutputGuardrail[Any]] | None = None,
) -> FunctionTool | Callable[[ToolFunction[...]], FunctionTool]:

Source Location

src/agents/tool.py, lines 811-989. The decorator is defined as a top-level function that supports both @function_tool (no parentheses) and @function_tool(...) (with keyword arguments) usage patterns.

I/O Contract

Inputs

func (ToolFunction[...] | None, default None): The Python function to wrap. When using @function_tool without parentheses, this is populated automatically.
name_override (str | None, default None): Custom name for the tool. Defaults to the function's __name__.
description_override (str | None, default None): Custom description. Defaults to the function's docstring.
docstring_style (DocstringStyle | None, default None): Force a docstring parsing style (Google, Sphinx, or Numpy). Auto-detected if None.
use_docstring_info (bool, default True): Whether to extract the description and argument docs from the docstring.
failure_error_function (ToolErrorFunction | None, default: built-in handler): Called when the tool raises an exception; returns an error string to send to the LLM. Set to None to propagate exceptions instead.
strict_mode (bool, default True): Enable strict JSON schema mode for the tool's parameters.
is_enabled (bool | Callable, default True): Controls whether the tool is visible to the LLM at runtime.
needs_approval (bool | Callable, default False): Whether execution requires human approval before proceeding.
tool_input_guardrails (list[ToolInputGuardrail[Any]] | None, default None): Guardrails that validate input before tool invocation.
tool_output_guardrails (list[ToolOutputGuardrail[Any]] | None, default None): Guardrails that validate output after tool invocation.

Outputs

The decorator returns a FunctionTool dataclass with the following key fields:

name (str): The tool name exposed to the LLM.
description (str): The tool description shown to the LLM.
params_json_schema (dict[str, Any]): The JSON schema for the tool's parameters.
on_invoke_tool (Callable[[ToolContext, str], Awaitable[Any]]): The async function that executes the tool given a JSON argument string.
strict_json_schema (bool): Whether strict schema mode is enabled.
is_enabled (bool | Callable): Runtime enablement control.
needs_approval (bool | Callable): Whether approval is required before execution.
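The role of params_json_schema can be illustrated with a hand-rolled sketch that derives a schema from type hints. This is a deliberate simplification: the real SDK uses function_schema() and a synthesized Pydantic model, and the type mapping below is an assumption for illustration only.

```python
import inspect

# Minimal Python-to-JSON-Schema type mapping (illustrative, not the SDK's).
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def simple_params_schema(func) -> dict:
    """Very rough stand-in for the parameter schema the SDK generates per tool."""
    sig = inspect.signature(func)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        # Parameters without a default become required in the schema.
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "type": "object",
        "properties": props,
        "required": required,
        "additionalProperties": False,
    }

def get_weather(city: str, units: str = "celsius") -> str:
    """Get weather information."""
    return f"Weather in {city} ({units})"

schema = simple_params_schema(get_weather)
print(schema["required"])  # ['city']
```

Note how the defaulted parameter units is present in properties but absent from required, which is how the LLM learns it may be omitted.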

Description

The function_tool decorator works through the following internal steps:

  1. Schema generation: Calls function_schema() to inspect the function's signature, type annotations, and docstring. This produces a FunctionSchema containing the JSON schema, a synthesized Pydantic model, and metadata about whether the function expects a context argument.
  2. Invocation wrapper creation: Builds an async _on_invoke_tool closure that:
    • Parses the JSON input string from the LLM into a dictionary.
    • Validates the dictionary against the Pydantic model.
    • Converts validated data into positional and keyword arguments via schema.to_call_args().
    • Injects ToolContext as the first argument if the function's signature expects it.
    • Calls the original function (awaiting async functions, or running sync functions via asyncio.to_thread to avoid blocking the event loop).
  3. Error handling: Wraps invocation in a try/except that delegates to failure_error_function. The default handler returns a string error message to the LLM and attaches tracing data to the current span.
  4. FunctionTool construction: Assembles and returns a FunctionTool dataclass with all the computed fields.
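The invocation wrapper in step 2 can be sketched in plain Python. This is a simplified stand-in: it skips Pydantic validation, context injection, and error handling, and make_on_invoke_tool is a hypothetical name, not the SDK's.

```python
import asyncio
import inspect
import json

def make_on_invoke_tool(func):
    """Simplified stand-in for the SDK's internal _on_invoke_tool closure."""
    async def on_invoke_tool(json_args: str):
        # 1. Parse the JSON argument string produced by the LLM.
        kwargs = json.loads(json_args)
        # 2. Await coroutine functions directly; run sync functions in a
        #    worker thread so the event loop is not blocked.
        if inspect.iscoroutinefunction(func):
            return await func(**kwargs)
        return await asyncio.to_thread(func, **kwargs)
    return on_invoke_tool

def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny."

invoke = make_on_invoke_tool(get_weather)
result = asyncio.run(invoke('{"city": "Paris"}'))
print(result)  # The weather in Paris is sunny.
```

The sync-versus-async branch mirrors the SDK's behavior described above; everything else (validation, ToolContext, failure_error_function) is omitted for brevity.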

The decorator supports a dual calling convention. When used as @function_tool (without parentheses), the func parameter receives the decorated function directly and the tool is returned immediately. When used as @function_tool(...) (with keyword arguments), it returns an intermediate decorator function.
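The dual calling convention can be sketched with a toy decorator (illustrative only; my_tool and tool_name are hypothetical names, not part of the SDK):

```python
def my_tool(func=None, *, name_override=None):
    """Toy decorator supporting both @my_tool and @my_tool(...) usage."""
    def build(f):
        # Stand-in for FunctionTool construction: attach the resolved name.
        f.tool_name = name_override or f.__name__
        return f
    if func is not None:
        # Used as @my_tool with no parentheses: build the tool immediately.
        return build(func)
    # Used as @my_tool(...): return an intermediate decorator.
    return build

@my_tool
def ping():
    return "pong"

@my_tool(name_override="echo_tool")
def echo(text):
    return text

print(ping.tool_name)  # ping
print(echo.tool_name)  # echo_tool
```

The key observation: with parentheses, Python calls the decorator factory first and then applies its return value to the function, so func arrives as None on the first call.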

Examples

Basic Usage

from agents import function_tool

@function_tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is sunny."

With Configuration Options

from agents import function_tool

@function_tool(name_override="lookup_weather", strict_mode=True)
def get_weather(city: str, units: str = "celsius") -> str:
    """Get weather information.

    Args:
        city: The city name
        units: Temperature units (celsius or fahrenheit)
    """
    return f"Weather in {city}: 22 degrees {units}"

Async Function with Context

from agents import function_tool, RunContextWrapper

@function_tool
async def lookup_user(ctx: RunContextWrapper, user_id: int) -> str:
    """Look up a user by their ID.

    Args:
        user_id: The unique user identifier
    """
    db = ctx.context.database
    user = await db.get_user(user_id)
    return f"User: {user.name}, Email: {user.email}"

With Approval and Guardrails

from agents import function_tool

@function_tool(
    needs_approval=True,
    failure_error_function=None,  # Raise exceptions instead of sending to LLM
)
def delete_record(record_id: str) -> str:
    """Permanently delete a record.

    Args:
        record_id: The ID of the record to delete
    """
    # This will only execute after human approval
    perform_deletion(record_id)
    return f"Record {record_id} deleted."

Wiring Into an Agent

from agents import Agent, Runner, function_tool

@function_tool
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    # eval() on model-supplied input is unsafe outside a demo; use a real parser in production.
    return str(eval(expression))

agent = Agent(
    name="calculator",
    instructions="Use the calculate tool to solve math problems.",
    tools=[calculate],
)

result = await Runner.run(agent, "What is 2 + 2?")  # inside an async context; use Runner.run_sync otherwise
print(result.final_output)  # e.g. "2 + 2 equals 4"
