Implementation:Openai Openai agents python Execute Tools And Side Effects
Overview
execute_tools_and_side_effects is the internal orchestration function in the OpenAI Agents Python SDK that handles all tool execution, approval processing, guardrail evaluation, handoff resolution, and final output detection within a single turn of the agent run loop. It is the central dispatch point that takes a parsed model response and determines the next step for the run.
| Property | Value |
|---|---|
| Source | `src/agents/run_internal/turn_resolution.py` (lines 481-640) |
| Import | Internal -- not directly imported by users |
| Inputs | `ProcessedResponse` (parsed model output with tool calls), `Agent`, `RunContextWrapper`, `RunConfig`, `RunHooks` |
| Outputs | `SingleStepResult` containing `NextStepRunAgain`, `NextStepFinalOutput`, `NextStepHandoff`, or `NextStepInterruption` |
| Related Principle | Tool Execution Loop |
Code Reference
Function Signature
```python
async def execute_tools_and_side_effects(
    *,
    agent: Agent[TContext],
    original_input: str | list[TResponseInputItem],
    pre_step_items: list[RunItem],
    new_response: ModelResponse,
    processed_response: ProcessedResponse,
    output_schema: AgentOutputSchemaBase | None,
    hooks: RunHooks[TContext],
    context_wrapper: RunContextWrapper[TContext],
    run_config: RunConfig,
) -> SingleStepResult:
```
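The returned `SingleStepResult` drives the run loop through its `next_step` field. The sketch below illustrates how a caller might branch on that field; the class names mirror the SDK's next-step markers, but the dataclass shapes and the `dispatch` helper are illustrative simplifications, not the SDK's actual internals:

```python
from dataclasses import dataclass

# Illustrative stand-ins for the SDK's next-step marker classes.
@dataclass
class NextStepRunAgain:
    pass

@dataclass
class NextStepFinalOutput:
    output: object

@dataclass
class NextStepHandoff:
    new_agent: str

@dataclass
class NextStepInterruption:
    interruptions: list

def dispatch(next_step):
    """Decide what the run loop does with a SingleStepResult.next_step."""
    if isinstance(next_step, NextStepFinalOutput):
        return ("finish", next_step.output)
    if isinstance(next_step, NextStepHandoff):
        return ("switch_agent", next_step.new_agent)
    if isinstance(next_step, NextStepInterruption):
        return ("pause_for_approval", next_step.interruptions)
    return ("run_model_again", None)
```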
Source Location
src/agents/run_internal/turn_resolution.py, lines 481-640. This function is called by get_single_step_result_from_response after the model response has been parsed into a ProcessedResponse.
Calling Context
The function is invoked from run_single_turn in src/agents/run_internal/run_loop.py (lines 1312-1399), which orchestrates a single non-streaming turn:
```python
async def run_single_turn(
    *,
    agent: Agent[TContext],
    all_tools: list[Tool],
    original_input: str | list[TResponseInputItem],
    generated_items: list[RunItem],
    hooks: RunHooks[TContext],
    context_wrapper: RunContextWrapper[TContext],
    run_config: RunConfig,
    should_run_agent_start_hooks: bool,
    tool_use_tracker: AgentToolUseTracker,
    server_conversation_tracker: OpenAIServerConversationTracker | None = None,
    session: Session | None = None,
    session_items_to_rewind: list[TResponseInputItem] | None = None,
) -> SingleStepResult:
```
I/O Contract
Inputs
| Parameter | Type | Description |
|---|---|---|
| `agent` | `Agent[TContext]` | The agent whose tools are being executed. |
| `original_input` | `str \| list[TResponseInputItem]` | The user's original input for this run. |
| `pre_step_items` | `list[RunItem]` | Items generated in previous turns (conversation history). |
| `new_response` | `ModelResponse` | The raw model response from the current turn. |
| `processed_response` | `ProcessedResponse` | The parsed model output containing categorized tool calls, handoffs, message items, and new items. |
| `output_schema` | `AgentOutputSchemaBase \| None` | The expected output schema (for structured output agents), or `None` for plain text. |
| `hooks` | `RunHooks[TContext]` | Global lifecycle hooks for the run. |
| `context_wrapper` | `RunContextWrapper[TContext]` | The run context providing access to user state. |
| `run_config` | `RunConfig` | Configuration for the current run (model settings, tracing, etc.). |
Outputs
Returns a SingleStepResult containing:
| Field | Type | Description |
|---|---|---|
| `original_input` | `str \| list[TResponseInputItem]` | Pass-through of the original input. |
| `model_response` | `ModelResponse` | The model response for this turn. |
| `pre_step_items` | `list[RunItem]` | Items from before this step. |
| `new_step_items` | `list[RunItem]` | New items generated during this step (tool results, messages, etc.). |
| `next_step` | `NextStep` | One of: `NextStepRunAgain`, `NextStepFinalOutput`, `NextStepHandoff`, or `NextStepInterruption`. |
| `tool_input_guardrail_results` | `list` | Results from input guardrail evaluation. |
| `tool_output_guardrail_results` | `list` | Results from output guardrail evaluation. |
Description
The function operates through the following sequence of steps:
Step 1: Build Execution Plan
Calls _build_plan_for_fresh_turn to analyze the ProcessedResponse and determine which tools need to be executed, which are awaiting approval, and which MCP requests need processing. The plan categorizes each tool call into the appropriate execution track.
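The categorization described above can be sketched as follows. This is a hypothetical simplification: the dictionary shapes, field names (`requires_approval`, `id`), and the `build_plan` helper are illustrative, not the SDK's actual plan structure:

```python
def build_plan(tool_calls: list[dict], approved_ids: set[str]) -> dict:
    """Sort parsed tool calls into execution tracks (illustrative sketch)."""
    plan = {"run_now": [], "awaiting_approval": []}
    for call in tool_calls:
        # A call that needs approval and has not yet been approved is
        # held back; everything else is scheduled for execution.
        if call.get("requires_approval") and call["id"] not in approved_ids:
            plan["awaiting_approval"].append(call)
        else:
            plan["run_now"].append(call)
    return plan
```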
Step 2: Deduplicate Tool Call Items
Calls _dedupe_tool_call_items to ensure that tool call items from the processed response are not duplicated with items already present in pre_step_items. This prevents double-counting when a run is resumed after an interruption.
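A minimal sketch of that dedup check, assuming items carry a `call_id` (the field names here are illustrative, not the SDK's actual internals):

```python
def dedupe_tool_call_items(pre_step_items: list[dict], new_items: list[dict]) -> list[dict]:
    """Drop tool-call items whose call_id already appears in prior items."""
    seen = {
        item["call_id"]
        for item in pre_step_items
        if item.get("type") == "tool_call"
    }
    return [
        item
        for item in new_items
        # Non-tool-call items (e.g. messages) always pass through.
        if not (item.get("type") == "tool_call" and item["call_id"] in seen)
    ]
```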
Step 3: Execute Tool Plan
Calls _execute_tool_plan, which dispatches all tool executions concurrently. This returns separate result sets for:
- Function tool results: outputs from `FunctionTool` invocations.
- Tool input guardrail results: pass/fail for input guardrails.
- Tool output guardrail results: pass/fail for output guardrails.
- Computer tool results: outputs from `ComputerTool` executions.
- Shell tool results: outputs from shell tool calls.
- Apply patch results: outputs from apply-patch operations.
- Local shell results: outputs from local shell tool calls.
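The concurrent dispatch can be sketched with `asyncio.gather`. This is an illustrative reduction, not the SDK's actual code; `run_tool` and `execute_tool_plan` are hypothetical names:

```python
import asyncio

async def run_tool(name: str, args: dict) -> str:
    """Stand-in for a real tool invocation with I/O latency."""
    await asyncio.sleep(0)
    return f"{name} done"

async def execute_tool_plan(calls: list) -> list:
    # All planned tool calls are awaited together, so slow tools overlap
    # rather than run back to back. gather() preserves submission order.
    return await asyncio.gather(*(run_tool(n, a) for n, a in calls))

results = asyncio.run(execute_tool_plan([("search", {}), ("fetch", {})]))
```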
Step 4: Build Result Items
Calls _build_tool_result_items to convert raw tool outputs into RunItem objects that can be appended to the conversation history.
Step 5: Collect Interruptions
Calls _collect_tool_interruptions to gather any approval-pending items from function, shell, and apply-patch results. If pending interruptions exist (from MCP or other approval flows), they are also collected.
Step 6: Handle Interruptions
If any interruptions were collected, the function returns immediately with NextStepInterruption. The run is paused until the developer approves or rejects the pending tool calls via RunState.
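Steps 5 and 6 amount to a short-circuit: pending approvals win over every other outcome. A simplified simulation (the `status` field and helper names are illustrative assumptions):

```python
def collect_interruptions(tool_results: list[dict]) -> list[dict]:
    """Gather results that are still awaiting developer approval."""
    return [r for r in tool_results if r.get("status") == "pending_approval"]

def resolve_step(tool_results: list[dict]):
    pending = collect_interruptions(tool_results)
    if pending:
        # The run pauses here until each pending call is approved or rejected.
        return ("interruption", pending)
    return ("run_again", None)
```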
Step 7: Process MCP Callbacks
If there are MCP requests with callbacks, _append_mcp_callback_results is called to process them and append results to the step items.
Step 8: Handle Handoffs
If the processed response contains handoff requests, execute_handoffs is called to transfer control to the target agent, returning NextStepHandoff.
Step 9: Check for Tool-Based Final Output
Calls _maybe_finalize_from_tool_results to check if the agent's tool_use_behavior setting indicates that a tool result should be treated as the final output (e.g., "stop_on_first_tool").
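The `"stop_on_first_tool"` check reduces to: if the agent is configured that way and at least one function tool ran, its output becomes the final output. A hedged sketch (helper name and result shape are illustrative):

```python
def maybe_finalize_from_tool_results(tool_use_behavior: str, function_results: list):
    """Return the first tool output as final output under stop_on_first_tool."""
    if tool_use_behavior == "stop_on_first_tool" and function_results:
        return function_results[0]
    # Default behavior ("run_llm_again"): tool results go back to the model.
    return None
```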
Step 10: Resolve Final Output or Continue
If no tool calls remain to process:
- If an output schema exists and the model produced structured text, validate it and return `NextStepFinalOutput`.
- If the output is plain text, return `NextStepFinalOutput` with the text.
If tool calls were executed and more processing is needed, return NextStepRunAgain to trigger another model invocation.
Examples
The execute_tools_and_side_effects function is internal and not called directly. The tool execution loop is triggered automatically through Runner.run():
```python
import asyncio

from agents import Agent, Runner, function_tool

@function_tool
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    # Demo only: eval() is unsafe on untrusted input.
    return str(eval(expression))

agent = Agent(
    name="calculator",
    instructions="Use the calculate tool to solve math problems.",
    tools=[calculate],
)

# Internally, Runner.run executes this loop:
# 1. run_single_turn calls get_new_response -> model returns tool call
# 2. get_single_step_result_from_response calls execute_tools_and_side_effects
# 3. execute_tools_and_side_effects runs calculate("2 + 2") -> "4"
# 4. Returns SingleStepResult with NextStepRunAgain
# 5. run_single_turn is called again with tool result in conversation
# 6. Model produces final text -> NextStepFinalOutput

async def main():
    result = await Runner.run(agent, "What is 2 + 2?")
    print(result.final_output)  # e.g. "2 + 2 equals 4"

asyncio.run(main())
```
Observing the Loop with Streaming
```python
import asyncio

from agents import Agent, Runner, function_tool

@function_tool
def search_database(query: str) -> str:
    """Search the internal database."""
    return f"Found 3 results for '{query}'"

agent = Agent(
    name="search_agent",
    instructions="Search for information and summarize results.",
    tools=[search_database],
)

# Streaming reveals each step of the tool loop.
async def main():
    result = Runner.run_streamed(agent, "Find reports about Q4 revenue")
    async for event in result.stream_events():
        if event.type == "run_item_stream_event":
            if event.item.type == "tool_call_item":
                print("Tool was called")
            elif event.item.type == "tool_call_output_item":
                print(f"Tool result: {event.item.output}")
        elif event.type == "agent_updated_stream_event":
            print(f"Agent updated: {event.new_agent.name}")

asyncio.run(main())
```
Multi-Tool Parallel Execution
```python
import asyncio

from agents import Agent, Runner, function_tool

@function_tool
def get_stock_price(symbol: str) -> str:
    """Get the current stock price."""
    prices = {"AAPL": "182.52", "GOOGL": "141.80", "MSFT": "378.91"}
    return prices.get(symbol, "unknown")

@function_tool
def get_company_info(symbol: str) -> str:
    """Get company information."""
    info = {"AAPL": "Apple Inc.", "GOOGL": "Alphabet Inc.", "MSFT": "Microsoft Corp."}
    return info.get(symbol, "unknown")

agent = Agent(
    name="stock_analyst",
    instructions="Provide stock analysis using available tools.",
    tools=[get_stock_price, get_company_info],
)

# The model may call both tools in parallel in a single turn;
# execute_tools_and_side_effects dispatches both concurrently.
async def main():
    result = await Runner.run(agent, "Tell me about AAPL stock")
    print(result.final_output)

asyncio.run(main())
```
Related Pages
- Principle: Tool Execution Loop -- the theoretical basis for the turn-based tool execution model
- Implementation: Function Tool Decorator -- how function tools are constructed before being executed
- Implementation: Hosted Tools -- server-side tools that bypass local execution
- Environment:Openai_Openai_agents_python_Python_3_9_Runtime
- Heuristic:Openai_Openai_agents_python_Tool_Choice_Reset_Prevents_Loops