Workflow: Microsoft Agent Framework Agent with Tool Approval
| Knowledge Sources | Details |
|---|---|
| Domains | AI_Agents, LLM_Ops, Safety, Python |
| Last Updated | 2026-02-11 17:00 GMT |
Overview
End-to-end process for creating an AI agent with function tools that require explicit human approval before execution, implementing a human-in-the-loop safety gate.
Description
This workflow demonstrates how to build agents where sensitive tool invocations require human approval before execution. The framework provides a built-in approval mechanism that pauses agent execution when a tool marked with approval_mode="always_require" is called, surfaces the pending tool call to the application, collects the user's approval or rejection decision, and resumes execution with the decision applied. This pattern is critical for production deployments where agents interact with external systems (databases, APIs, financial transactions) and actions must be vetted by a human before proceeding.
Usage
Execute this workflow when building agents that call tools with side effects requiring human oversight, such as modifying databases, sending emails, executing financial transactions, or invoking external APIs. This is the recommended default for production-grade agents where uncontrolled tool execution poses risk. You should have a user interface or command-line loop capable of presenting approval requests and collecting responses.
Execution Steps
Step 1: Define tools with approval requirements
Create Python functions decorated with @tool and set the approval_mode parameter. Tools that perform sensitive operations should use approval_mode="always_require", while safe read-only tools can use approval_mode="never_require". The approval mode determines whether the framework pauses execution to request human consent before invoking the tool.
Key considerations:
- "always_require" pauses execution and surfaces a UserInputRequest to the application
- "never_require" executes immediately without human intervention
- The default recommendation for production is "always_require" unless tool behavior is well understood
- Tool parameter annotations and docstrings are still sent to the LLM for reasoning
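As a minimal sketch of Step 1, the block below defines one sensitive tool and one read-only tool with the `approval_mode` values described above. The `tool` decorator here is a stand-in written so the sketch runs without the framework installed; the real decorator's module path and internals are not shown in this workflow, so treat everything except the `approval_mode` parameter names as assumptions.

```python
from typing import Annotated, Callable

# Stand-in for the framework's @tool decorator (assumption: the real
# decorator accepts the same approval_mode parameter and records it as
# metadata the runtime consults before invoking the tool).
def tool(*, approval_mode: str = "always_require") -> Callable:
    def decorator(fn: Callable) -> Callable:
        fn.approval_mode = approval_mode  # metadata checked at call time
        return fn
    return decorator

@tool(approval_mode="always_require")
def delete_customer_record(
    customer_id: Annotated[str, "ID of the record to delete"],
) -> str:
    """Delete a customer record. Destructive: requires human approval."""
    return f"deleted {customer_id}"

@tool(approval_mode="never_require")
def lookup_customer(
    customer_id: Annotated[str, "ID of the record to read"],
) -> str:
    """Read-only lookup: safe to execute without human intervention."""
    return f"found {customer_id}"
```

Note that the docstrings and parameter annotations still reach the LLM for tool-selection reasoning; the approval mode only gates execution, not visibility.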
Step 2: Create the agent with approval-required tools
Instantiate the Agent with the chat client and the tools that have approval requirements configured. Use the async context manager pattern (async with Agent(...) as agent:) to ensure proper resource lifecycle management, especially when working with assistant-based clients that create server-side resources.
Key considerations:
- The context manager pattern handles cleanup of server-side resources
- Both agent-level and run-level tools can have approval requirements
- Middleware can be combined with approval gates for layered security
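The lifecycle guarantee in Step 2 can be illustrated with a stand-in `Agent` class; the constructor arguments (`chat_client`, `tools`) follow the workflow's description, but the real class's signature and cleanup behavior belong to the framework and are assumptions here.

```python
import asyncio

# Stand-in Agent showing the async context manager lifecycle:
# __aexit__ runs even if the body raises, mirroring how the real
# framework would tear down server-side resources.
class Agent:
    def __init__(self, chat_client=None, tools=None):
        self.chat_client = chat_client
        self.tools = tools or []
        self.closed = False

    async def __aenter__(self):
        # Real clients may create server-side resources (e.g. assistants) here.
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # Guaranteed cleanup, whether the body completed or raised.
        self.closed = True

async def main() -> bool:
    async with Agent(chat_client=None, tools=["delete_customer_record"]) as agent:
        assert not agent.closed  # resources live inside the block
    return agent.closed          # True: released when the block exits

released = asyncio.run(main())
```

The same pattern applies whether tools are attached at agent construction or passed per run; the context manager only governs resource lifetime.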
Step 3: Run the agent and detect approval requests
Execute agent.run() with the user query. When the LLM decides to call an approval-required tool, the framework pauses and returns an AgentResponse containing user_input_requests. Each request describes the pending tool call (function name, arguments) and provides a request identifier for responding.
Key considerations:
- Check result.user_input_requests to detect pending approvals
- Each request contains the tool name, arguments, and a unique request_id
- In streaming mode, collect requests from chunk.user_input_requests during iteration
- The agent does not proceed until all pending requests are resolved
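The detection logic in Step 3 can be sketched with simplified response shapes. The field names (`user_input_requests`, `request_id`) follow the workflow text; the real `AgentResponse` and `UserInputRequest` classes live in the framework and may carry more fields.

```python
from dataclasses import dataclass, field

# Simplified shapes mirroring the response objects described above.
@dataclass
class UserInputRequest:
    request_id: str
    function_name: str
    arguments: dict

@dataclass
class AgentResponse:
    text: str = ""
    user_input_requests: list = field(default_factory=list)

def pending_approvals(result: AgentResponse) -> list:
    """Return the tool calls the agent is paused on, if any."""
    return list(result.user_input_requests)

# A run that paused on an approval-required tool call:
result = AgentResponse(user_input_requests=[
    UserInputRequest("req-1", "delete_customer_record", {"customer_id": "c-42"}),
])

for req in pending_approvals(result):
    print(f"pending: {req.function_name}({req.arguments}) id={req.request_id}")
```

In streaming mode the same check applies per chunk: accumulate `chunk.user_input_requests` as you iterate, then resolve them all before resuming.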
Step 4: Present the approval request and collect user decision
Extract the pending tool call details from the UserInputRequest and present them to the user through your application's interface. The user reviews the tool name and arguments and decides to approve or reject the invocation.
Key considerations:
- Display the function name and arguments clearly to the user
- Allow both approval (True) and rejection (False) responses
- Use to_function_approval_response(bool) to create the properly formatted response
- Multiple tools may require approval in a single turn
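For Step 4, the presentation and decision-parsing logic is plain application code; the helpers below are illustrative (their names are not part of the framework), and only the final boolean handed to `to_function_approval_response` matters.

```python
def format_approval_prompt(function_name: str, arguments: dict) -> str:
    """Render a pending tool call for human review (illustrative layout)."""
    args = ", ".join(f"{k}={v!r}" for k, v in arguments.items())
    return f"Agent wants to call {function_name}({args}). Approve? [y/N] "

def parse_decision(raw: str) -> bool:
    """Map a console reply to the bool for to_function_approval_response.
    Anything other than an explicit yes is treated as a rejection."""
    return raw.strip().lower() in ("y", "yes")

prompt = format_approval_prompt(
    "delete_customer_record", {"customer_id": "c-42"}
)
# In a CLI loop this would feed the framework's response builder, e.g.:
#   response = request.to_function_approval_response(parse_decision(input(prompt)))
```

Defaulting to rejection on ambiguous input is a deliberate safety choice: an unattended or mistyped reply should never execute a sensitive tool.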
Step 5: Resume execution with the approval decision
Build a new message sequence containing the original query, the approval request, and the user's response, then call agent.run() again. The framework processes the approval, either executing the approved tool or skipping the rejected one, and continues the agent's reasoning loop until completion.
Key considerations:
- Accumulate messages: original query + approval request messages + approval response
- The agent may request additional approvals in subsequent turns (loop pattern)
- Continue the approval loop until user_input_requests is empty
- Both streaming and non-streaming resumption are supported
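The full resumption loop of Step 5 can be simulated end to end. `FakeAgent` stands in for the real agent (it pauses once, then completes), and the message shapes are assumptions; the loop structure itself is the point: accumulate messages, answer every pending request, and re-run until nothing is pending.

```python
from dataclasses import dataclass, field

@dataclass
class UserInputRequest:
    request_id: str
    function_name: str
    arguments: dict

    def to_function_approval_response(self, approved: bool) -> dict:
        # Illustrative message shape; the framework defines the real type.
        return {"role": "tool_approval", "request_id": self.request_id,
                "approved": approved}

@dataclass
class AgentResponse:
    text: str = ""
    user_input_requests: list = field(default_factory=list)

class FakeAgent:
    """Pauses once for approval, then completes: models the loop above."""
    def run(self, messages):
        approvals = [m for m in messages if isinstance(m, dict)
                     and m.get("role") == "tool_approval"]
        if not approvals:
            return AgentResponse(user_input_requests=[
                UserInputRequest("req-1", "send_email",
                                 {"to": "ops@example.com"})])
        if approvals[-1]["approved"]:
            return AgentResponse(text="email sent")
        return AgentResponse(text="action skipped")

agent = FakeAgent()
messages = ["Send the outage notice to ops"]   # original user query
result = agent.run(messages)

while result.user_input_requests:              # loop until nothing is pending
    for req in result.user_input_requests:
        messages.append(req)                   # keep the request in history
        decision = True                        # would come from the user
        messages.append(req.to_function_approval_response(decision))
    result = agent.run(messages)

print(result.text)  # prints "email sent"
```

A rejection (`decision = False`) follows the same path: the framework skips the tool and the agent continues reasoning, typically reporting that the action was not taken.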