# Principle: Hosted Tool Configuration
## Overview
Hosted Tool Configuration describes the theory and design of server-side hosted tools in the OpenAI Agents Python SDK. Unlike function tools that execute locally within the application process, hosted tools (WebSearchTool, FileSearchTool, CodeInterpreterTool, HostedMCPTool) are executed remotely on OpenAI's infrastructure via the Responses API. The developer only configures the tool; all execution happens server-side.
| Property | Value |
|---|---|
| Category | Tool Integration (Server-Side) |
| Source | `src/agents/tool.py` (lines 277-565) |
| Import | `from agents import WebSearchTool, FileSearchTool, CodeInterpreterTool, HostedMCPTool` |
| Related Implementation | Hosted Tools |
## Description
The OpenAI Agents SDK provides two categories of tools: function tools (local execution) and hosted tools (remote execution). Hosted tools are a key architectural distinction because they shift the execution burden entirely to the API provider. The developer's application never runs the tool logic; instead, the Responses API recognizes the tool configuration, executes the operation on its infrastructure, and returns the result as part of the model's response.
This design offers several advantages:
- No local dependencies: Web search, file indexing, and code execution do not require local libraries or sandboxes.
- Reduced latency for chained operations: Since the tool runs co-located with the model, there is no round-trip between client and server for tool results.
- Security isolation: Code execution happens in OpenAI's sandbox, not in the developer's process.
## Theoretical Basis

### The Hosted Tool Taxonomy
The SDK defines four hosted tool types, each serving a distinct capability:
#### `WebSearchTool`
Provides real-time web search. The LLM decides when to search, formulates the query internally, and receives search results that are incorporated into its response. Configuration options include:
- `user_location`: A `UserLocation` object with city, region, and country fields for location-aware results.
- `search_context_size`: Controls how much surrounding context is retrieved per result (`"low"`, `"medium"`, or `"high"`).
- `filters`: Domain-level filtering to restrict or prioritize certain websites.
#### `FileSearchTool`
Enables retrieval-augmented generation (RAG) over pre-indexed vector stores. The tool queries one or more vector stores and returns relevant document chunks that the model uses to ground its answers.
- `vector_store_ids`: A list of vector store identifiers to search.
- `max_num_results`: Limits the number of retrieved chunks.
- `include_search_results`: When `True`, raw search results are included in the LLM output for downstream inspection.
- `ranking_options`: Fine-grained control over retrieval ranking.
- `filters`: Attribute-based filters for narrowing the search space.
#### `HostedMCPTool`
Connects to a remote Model Context Protocol (MCP) server. MCP is a standardized protocol for tool interoperability. The hosted variant runs entirely server-side: the LLM discovers available tools from the MCP server, calls them, and receives results without any client-side round-trip.
- `tool_config`: An `Mcp` configuration object specifying the server URL, allowed tools, and other MCP settings.
- `on_approval_request`: An optional callback for human-in-the-loop approval of MCP tool calls.
For local MCP servers (stdio-based or running in a VPC), the SDK provides a separate `Agent(mcp_servers=[...])` mechanism that executes MCP tool calls locally.
#### `CodeInterpreterTool`
Provides sandboxed code execution. The LLM can write and execute Python code in a secure container, returning both textual output and generated files.
- `tool_config`: A `CodeInterpreter` configuration object with container and runtime settings.
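As a hedged illustration, a `tool_config` payload for the code interpreter might look like the following. The dict shape mirrors the Responses API `code_interpreter` tool parameter as I understand it (the `"auto"` container asks the API to provision a fresh sandbox); treat the exact keys as an assumption to verify against the current API reference.

```python
# Illustrative tool_config for CodeInterpreterTool (shape assumed from the
# Responses API code_interpreter tool parameter, not taken from the SDK source).
code_interpreter_config = {
    "type": "code_interpreter",
    # "auto" requests a fresh sandbox container; a file_ids list could be
    # added here to preload files into the container.
    "container": {"type": "auto"},
}
```

This dict would then be passed as `CodeInterpreterTool(tool_config=code_interpreter_config)`.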
### Configuration-Only Design
A unifying principle of hosted tools is that they are configuration-only dataclasses. Unlike `FunctionTool`, which requires an `on_invoke_tool` callable, hosted tools contain no invocation logic. Instead, the SDK's model provider layer recognizes the tool type and translates it into the appropriate Responses API parameter. The API itself handles tool invocation and result injection.
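The pattern can be sketched with a simplified mimic. The types and the `to_responses_api_param` translator below are hypothetical stand-ins, not the SDK's own classes; they only show the idea that a hosted tool is pure data which the model-provider layer maps to an API parameter.

```python
from dataclasses import dataclass

# Hypothetical mimic of a configuration-only hosted tool: data, no callable.
@dataclass(frozen=True)
class WebSearchConfig:
    search_context_size: str = "medium"

def to_responses_api_param(tool: object) -> dict:
    # A model-provider layer dispatches on the tool type and emits the
    # matching API tool parameter; execution then happens server-side.
    if isinstance(tool, WebSearchConfig):
        return {"type": "web_search", "search_context_size": tool.search_context_size}
    raise TypeError(f"unknown hosted tool: {tool!r}")

param = to_responses_api_param(WebSearchConfig(search_context_size="high"))
# param is {"type": "web_search", "search_context_size": "high"}
```

Because the dataclass carries no behavior, serializing it is the entire integration surface; contrast this with `FunctionTool`, whose `on_invoke_tool` must run in the client process.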
### Integration with the Tool Execution Loop
When the model response includes a hosted tool call, the agent run loop does not execute any local code for that tool. The result is already embedded in the model's response. However, certain hosted tools (like `HostedMCPTool`) may produce approval requests that must be handled locally before the run can continue. This is managed through the `on_approval_request` callback or through the `RunState` interruption mechanism.
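A minimal sketch of such an approval policy is shown below. The argument and return shapes here are illustrative stand-ins (the SDK's real callback receives a richer approval-request object), but the decision logic is the same: approve known-safe tools automatically and flag everything else for review.

```python
# Illustrative approval policy for hosted MCP tool calls. The tool_name
# argument and the {"approve": ..., "reason": ...} result shape are
# simplified stand-ins for the SDK's approval-request types.
READ_ONLY_MCP_TOOLS = {"search_docs"}  # assumption: these need no human sign-off

def approve_mcp_call(tool_name: str) -> dict:
    if tool_name in READ_ONLY_MCP_TOOLS:
        return {"approve": True}
    return {"approve": False, "reason": f"{tool_name} requires manual review"}
```

A rejected call would surface as an interruption, letting the application collect a human decision before resuming the run.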
## Usage
```python
from agents import Agent, WebSearchTool, FileSearchTool, CodeInterpreterTool, HostedMCPTool
from openai.types.responses.web_search_tool_param import UserLocation

# Web search agent with location awareness
search_agent = Agent(
    name="researcher",
    instructions="Search the web to answer questions.",
    tools=[
        WebSearchTool(
            user_location=UserLocation(
                city="San Francisco",
                region="California",
                country="US",
            ),
            search_context_size="high",
        )
    ],
)

# RAG agent over a document store
rag_agent = Agent(
    name="rag_assistant",
    instructions="Answer questions from the document store.",
    tools=[
        FileSearchTool(
            vector_store_ids=["vs_abc123"],
            max_num_results=10,
            include_search_results=True,
        )
    ],
)

# Agent with remote MCP tool access
mcp_agent = Agent(
    name="integration_agent",
    instructions="Use external services to fulfill requests.",
    tools=[
        HostedMCPTool(
            tool_config={
                "type": "mcp",
                "server_label": "my_server",
                "server_url": "https://mcp.example.com",
                "allowed_tools": {"tool_names": ["create_issue", "search_docs"]},
            }
        )
    ],
)
```
## Related Pages
- Implementation: Hosted Tools -- the concrete dataclass definitions for each hosted tool
- Principle: Function Tool Definition -- local function tools as an alternative to hosted tools
- Principle: Tool Execution Loop -- how tool calls (both local and hosted) are processed in the run cycle
- Heuristic: MCP Server Lifecycle Management