Implementation:Openai Openai agents python Hosted Tools
Overview
Hosted Tools are the server-side tool dataclass definitions in the OpenAI Agents Python SDK. Each hosted tool is a configuration-only @dataclass that instructs the Responses API to execute a specific capability (web search, file search, code interpretation, or remote MCP tool access) on OpenAI's infrastructure. The developer configures the tool; the API handles all execution.
| Property | Value |
|---|---|
| Source | `src/agents/tool.py` (lines 277-565) |
| Import | `from agents import WebSearchTool, FileSearchTool, CodeInterpreterTool, HostedMCPTool` |
| Inputs | Configuration parameters (vector store IDs, location, MCP server URL, etc.) |
| Outputs | Dataclass instances passed to `Agent(tools=[...])` |
| Related Principle | Hosted Tool Configuration |
Code Reference
WebSearchTool
```python
@dataclass
class WebSearchTool:
    """A hosted tool that lets the LLM search the web. Currently only supported
    with OpenAI models, using the Responses API."""

    user_location: UserLocation | None = None
    """Optional location for the search. Lets you customize results to be
    relevant to a location."""

    filters: WebSearchToolFilters | None = None
    """A filter to apply based on allowed domains."""

    search_context_size: Literal["low", "medium", "high"] = "medium"
    """The amount of context to use for the search."""

    @property
    def name(self):
        return "web_search"
```
FileSearchTool
```python
@dataclass
class FileSearchTool:
    """A hosted tool that lets the LLM search through a vector store. Currently
    only supported with OpenAI models, using the Responses API."""

    vector_store_ids: list[str]
    """The IDs of the vector stores to search."""

    max_num_results: int | None = None
    """The maximum number of results to return."""

    include_search_results: bool = False
    """Whether to include the search results in the output produced by the LLM."""

    ranking_options: RankingOptions | None = None
    """Ranking options for search."""

    filters: Filters | None = None
    """A filter to apply based on file attributes."""

    @property
    def name(self):
        return "file_search"
```
HostedMCPTool
```python
@dataclass
class HostedMCPTool:
    """A tool that allows the LLM to use a remote MCP server. The LLM will
    automatically list and call tools, without requiring a round trip back
    to your code."""

    tool_config: Mcp
    """The MCP tool config, which includes the server URL and other settings."""

    on_approval_request: MCPToolApprovalFunction | None = None
    """An optional function that will be called if approval is requested for
    an MCP tool."""

    @property
    def name(self):
        return "hosted_mcp"
```
CodeInterpreterTool
```python
@dataclass
class CodeInterpreterTool:
    """A tool that allows the LLM to execute code in a sandboxed environment."""

    tool_config: CodeInterpreter
    """The tool config, which includes the container and other settings."""

    @property
    def name(self):
        return "code_interpreter"
```
I/O Contract
WebSearchTool
| Parameter | Type | Default | Description |
|---|---|---|---|
| `user_location` | `UserLocation \| None` | `None` | Location hint for search result relevance (city, region, country). |
| `filters` | `WebSearchToolFilters \| None` | `None` | Domain-level filters for restricting search results. |
| `search_context_size` | `Literal["low", "medium", "high"]` | `"medium"` | Controls how much context is fetched per search result. |
FileSearchTool
| Parameter | Type | Default | Description |
|---|---|---|---|
| `vector_store_ids` | `list[str]` | (required) | IDs of vector stores to search. |
| `max_num_results` | `int \| None` | `None` | Maximum number of document chunks to retrieve. |
| `include_search_results` | `bool` | `False` | Whether raw search results appear in LLM output. |
| `ranking_options` | `RankingOptions \| None` | `None` | Fine-grained ranking configuration. |
| `filters` | `Filters \| None` | `None` | Attribute-based filters on the vector store. |
HostedMCPTool
| Parameter | Type | Default | Description |
|---|---|---|---|
| `tool_config` | `Mcp` | (required) | MCP server configuration (URL, allowed tools, etc.). |
| `on_approval_request` | `MCPToolApprovalFunction \| None` | `None` | Callback for approving or rejecting MCP tool calls. |
CodeInterpreterTool
| Parameter | Type | Default | Description |
|---|---|---|---|
| `tool_config` | `CodeInterpreter` | (required) | Code interpreter container and runtime configuration. |
Description
Each hosted tool is a simple dataclass with no invocation logic. The SDK's model integration layer (specifically the OpenAI Responses API provider) inspects the agent's tool list, recognizes hosted tool instances by their type, and serializes them into the appropriate API parameters. The key aspects of the implementation are:
Name Property
Each hosted tool exposes a read-only name property that returns a fixed string identifier ("web_search", "file_search", "hosted_mcp", "code_interpreter"). This name is used internally by the SDK to distinguish hosted tools from function tools during tool list assembly.
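The pattern is an ordinary dataclass with a read-only `@property`. A minimal stand-in (not the SDK class itself, just a sketch of the same shape) illustrates why the name cannot be overridden by configuration:

```python
from dataclasses import dataclass

# Hypothetical stand-in mirroring the hosted-tool pattern described above.
@dataclass
class WebSearchToolSketch:
    search_context_size: str = "medium"

    @property
    def name(self) -> str:
        # Fixed identifier the SDK matches on during tool list assembly.
        return "web_search"

tool = WebSearchToolSketch(search_context_size="high")
print(tool.name)  # → web_search
```

Because `name` is a property rather than a field, every instance of a given hosted tool class reports the same identifier regardless of its configuration.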
No Local Execution
Unlike FunctionTool, which carries an on_invoke_tool callable, hosted tools have no execution method. When the model response includes a hosted tool call, the result is already present in the response. The tool execution loop recognizes these as server-handled and does not attempt local invocation.
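The distinction can be sketched with two hypothetical stand-in classes (these are not the SDK's actual types): a function tool carries an invocation callable, a hosted tool carries configuration only, so a type- or attribute-based check is enough to route them.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical stand-ins for illustration only.
@dataclass
class FunctionToolSketch:
    name: str
    on_invoke_tool: Callable[..., Any]  # local execution callable

@dataclass
class HostedToolSketch:
    name: str  # configuration only — nothing to run locally

def needs_local_invocation(tool: Any) -> bool:
    # A run loop only invokes tools that carry an execution callable;
    # hosted tool results arrive pre-computed in the model response.
    return callable(getattr(tool, "on_invoke_tool", None))

print(needs_local_invocation(FunctionToolSketch("add", lambda a, b: a + b)))  # → True
print(needs_local_invocation(HostedToolSketch("web_search")))                 # → False
```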
Approval Flow for HostedMCPTool
The HostedMCPTool is unique among hosted tools in that it supports an approval workflow. When the MCP server requests approval for a tool call, the on_approval_request callback is invoked with an MCPToolApprovalRequest containing the run context and the approval data. The callback returns an MCPToolApprovalFunctionResult indicating whether to approve or reject (with an optional reason). If no callback is provided, the run is interrupted and the developer must manually approve or reject via RunState.
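A conditional approval callback might look like the sketch below. The dict return shape follows the `MCPToolApprovalFunctionResult` contract described above; the attribute path `request.data.name` and the allowlist are assumptions for illustration, so check `MCPToolApprovalRequest` for the real request shape.

```python
import asyncio
from types import SimpleNamespace

ALLOWED = {"create_issue", "list_issues"}  # hypothetical allowlist

async def handle_approval(request):
    # request.data.name is assumed here to be the MCP tool being called.
    tool_name = request.data.name
    if tool_name in ALLOWED:
        return {"approve": True}
    return {"approve": False, "reason": f"{tool_name} is not on the allowlist"}

# Simulate an approval request with a stand-in object.
fake = SimpleNamespace(data=SimpleNamespace(name="delete_repo"))
print(asyncio.run(handle_approval(fake)))
# → {'approve': False, 'reason': 'delete_repo is not on the allowlist'}
```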
Examples
Web Search with Location
```python
from agents import Agent, WebSearchTool
from openai.types.responses.web_search_tool_param import UserLocation

agent = Agent(
    name="researcher",
    instructions="Search the web to answer questions.",
    tools=[
        WebSearchTool(
            user_location=UserLocation(
                city="San Francisco",
                region="California",
                country="US",
            ),
            search_context_size="high",
        )
    ],
)
```
File Search (RAG)
```python
from agents import Agent, FileSearchTool

rag_agent = Agent(
    name="rag_assistant",
    instructions="Answer from the document store.",
    tools=[
        FileSearchTool(
            vector_store_ids=["vs_abc123"],
            max_num_results=5,
            include_search_results=True,
        )
    ],
)
```
Code Interpreter
```python
from agents import Agent, CodeInterpreterTool

coding_agent = Agent(
    name="coder",
    instructions="Write and execute Python code to solve problems.",
    tools=[
        CodeInterpreterTool(
            tool_config={"type": "code_interpreter", "container": {"type": "auto"}}
        )
    ],
)
```
Hosted MCP with Approval
```python
from agents import Agent, HostedMCPTool

async def handle_approval(request):
    """Auto-approve all tool calls from the MCP server."""
    return {"approve": True}

mcp_agent = Agent(
    name="integration_agent",
    instructions="Use external services.",
    tools=[
        HostedMCPTool(
            tool_config={
                "type": "mcp",
                "server_label": "my_server",
                "server_url": "https://mcp.example.com",
                "allowed_tools": {"tool_names": ["create_issue"]},
            },
            on_approval_request=handle_approval,
        )
    ],
)
```
Related Pages
- Principle: Hosted Tool Configuration -- the theoretical basis for hosted tools
- Implementation: Function Tool Decorator -- local function tools as an alternative
- Implementation: Execute Tools and Side Effects -- how tool results are processed in the run loop
- Environment:Openai_Openai_agents_python_MCP_Dependencies
- Heuristic:Openai_Openai_agents_python_MCP_Server_Lifecycle_Management