Implementation: OpenAI Agents Python Agent Constructor
| Property | Value |
|---|---|
| Implementation Name | Agent Constructor |
| SDK | OpenAI Agents Python |
| Repository | openai-agents-python |
| Source File | src/agents/agent.py |
| Line Range | L217-296 |
| Import | from agents import Agent |
| Type | Dataclass constructor |
Overview
The Agent Constructor is the primary entry point for creating agent instances in the OpenAI Agents Python SDK. The Agent class is a Python dataclass that inherits from AgentBase and is generic over TContext. It exposes all configuration fields as constructor parameters with sensible defaults, enabling both minimal and fully customized agent definitions.
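As a standalone illustration of this pattern (not the SDK's actual code), a generic dataclass with mutable-default fields looks like the sketch below; the names SimpleAgent and TContext here are illustrative stand-ins:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Generic, TypeVar

TContext = TypeVar("TContext")

@dataclass
class SimpleAgent(Generic[TContext]):
    # Mutable defaults like lists must use field(default_factory=...),
    # otherwise every instance would share a single list object.
    name: str
    instructions: str | None = None
    handoffs: list[SimpleAgent] = field(default_factory=list)

a = SimpleAgent(name="assistant")
b = SimpleAgent(name="router", handoffs=[a])
```

The default_factory idiom is why the real Agent class uses field(default_factory=list) for handoffs, guardrails, and similar list-valued fields.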
Code Reference
Source Location
| Property | Value |
|---|---|
| File | src/agents/agent.py |
| Class | Agent |
| Base Classes | AgentBase, Generic[TContext] |
| Lines | 217-296 |
Signature
@dataclass
class Agent(AgentBase, Generic[TContext]):
    instructions: (
        str
        | Callable[
            [RunContextWrapper[TContext], Agent[TContext]],
            MaybeAwaitable[str],
        ]
        | None
    ) = None
    prompt: Prompt | DynamicPromptFunction | None = None
    handoffs: list[Agent[Any] | Handoff[TContext, Any]] = field(default_factory=list)
    model: str | Model | None = None
    model_settings: ModelSettings = field(default_factory=get_default_model_settings)
    input_guardrails: list[InputGuardrail[TContext]] = field(default_factory=list)
    output_guardrails: list[OutputGuardrail[TContext]] = field(default_factory=list)
    output_type: type[Any] | AgentOutputSchemaBase | None = None
    hooks: AgentHooks[TContext] | None = None
    tool_use_behavior: (
        Literal["run_llm_again", "stop_on_first_tool"]
        | StopAtTools
        | ToolsToFinalOutputFunction
    ) = "run_llm_again"
Import Statement
from agents import Agent
I/O Contract
Inputs
| Parameter | Type | Default | Description |
|---|---|---|---|
| name | str | (required, from AgentBase) | The name of the agent, used for identification and tracing. |
| instructions | str \| Callable \| None | None | System prompt for the agent. Can be a static string or a callable that receives the run context and the agent, returning a string (or an awaitable string). |
| prompt | Prompt \| DynamicPromptFunction \| None | None | A prompt object for dynamically configuring instructions, tools, and other config. Only usable with OpenAI models via the Responses API. |
| handoffs | list[Agent[Any] \| Handoff[TContext, Any]] | [] | Sub-agents that this agent can delegate to during execution. |
| model | str \| Model \| None | None | The model to use. Defaults to the SDK default model (currently "gpt-4.1"). |
| model_settings | ModelSettings | get_default_model_settings() | Model tuning parameters such as temperature, top_p, max_tokens. |
| input_guardrails | list[InputGuardrail[TContext]] | [] | Checks that run in parallel before the agent generates a response. Only run for the first agent in the chain. |
| output_guardrails | list[OutputGuardrail[TContext]] | [] | Checks that run on the final output after the agent generates a response. |
| output_type | type[Any] \| AgentOutputSchemaBase \| None | None | The structured output type. If None, the output is a plain str. |
| hooks | AgentHooks[TContext] \| None | None | Lifecycle callbacks for agent events (start, end, tool call, handoff). |
| tool_use_behavior | Literal["run_llm_again", "stop_on_first_tool"] \| StopAtTools \| ToolsToFinalOutputFunction | "run_llm_again" | Controls how tool results are handled: re-invoke the LLM, stop on the first tool result, stop at specific tools, or use a custom function. |
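The str-or-callable contract for instructions can be sketched in plain Python. This resolver is illustrative only, not the SDK's internal implementation; resolve_instructions, the Instructions alias, and the dict-based context stub are all assumptions made for the sketch:

```python
from __future__ import annotations

import asyncio
import inspect
from typing import Any, Awaitable, Callable, Union

# Illustrative stand-in for the instructions field's type: a static
# string, a sync or async callable, or None.
Instructions = Union[str, Callable[..., Union[str, Awaitable[str]]], None]

async def resolve_instructions(instructions: Instructions, ctx: Any, agent: Any) -> str | None:
    """Resolve instructions to a final string, mirroring the MaybeAwaitable[str] contract."""
    if instructions is None or isinstance(instructions, str):
        return instructions
    result = instructions(ctx, agent)
    if inspect.isawaitable(result):  # async callables return an awaitable
        result = await result
    return result

async def dynamic(ctx: Any, agent: Any) -> str:
    return f"Help {ctx['user']}."

static = asyncio.run(resolve_instructions("Be concise.", {}, None))
dyn = asyncio.run(resolve_instructions(dynamic, {"user": "Ada"}, None))
```

Both a plain string and an async callable resolve to the same thing: the system prompt string the model ultimately receives.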
Output
| Type | Description |
|---|---|
| Agent[TContext] | A configured agent instance ready for execution via Runner.run(). |
Usage Examples
Minimal Agent
from agents import Agent

agent = Agent(
    name="assistant",
    instructions="You are a helpful assistant that answers questions concisely.",
    model="gpt-4.1",
)
Agent with Dynamic Instructions
from agents import Agent, RunContextWrapper

def get_instructions(ctx: RunContextWrapper, agent: Agent) -> str:
    user_name = ctx.context.get("user_name", "User")
    return f"You are a personal assistant for {user_name}. Be concise and helpful."

agent = Agent(
    name="personal_assistant",
    instructions=get_instructions,
)
Agent with Tools and Guardrails
from agents import Agent

# web_search_tool, calculator_tool, both guardrails, and ResearchReport
# are assumed to be defined elsewhere.
agent = Agent(
    name="research_agent",
    instructions="You help with research questions using the provided tools.",
    model="gpt-4.1",
    tools=[web_search_tool, calculator_tool],
    input_guardrails=[topic_filter_guardrail],
    output_guardrails=[factuality_guardrail],
    output_type=ResearchReport,
)
Agent with Structured Output
from pydantic import BaseModel
from agents import Agent

class WeatherReport(BaseModel):
    city: str
    temperature: float
    conditions: str

agent = Agent(
    name="weather_agent",
    instructions="Extract weather information from the user query.",
    output_type=WeatherReport,
)
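Behavior of the tool_use_behavior options can be illustrated with a standalone dispatcher. This is a sketch, not the SDK's run loop; the StopAtTools dataclass and should_stop function here are simplified stand-ins:

```python
from __future__ import annotations

from dataclasses import dataclass, field

@dataclass
class StopAtTools:
    # Simplified stand-in for the SDK's StopAtTools configuration.
    stop_at_tool_names: list[str] = field(default_factory=list)

def should_stop(behavior, tool_name: str) -> bool:
    """Decide whether a tool result ends the run (sketch of the dispatch logic)."""
    if behavior == "run_llm_again":
        return False                      # feed the tool result back to the LLM
    if behavior == "stop_on_first_tool":
        return True                       # first tool output becomes the final output
    if isinstance(behavior, StopAtTools):
        return tool_name in behavior.stop_at_tool_names
    if callable(behavior):                # custom ToolsToFinalOutputFunction-style hook
        return behavior(tool_name)
    raise ValueError(f"unknown behavior: {behavior!r}")
```

For example, should_stop(StopAtTools(stop_at_tool_names=["finalize"]), "search") keeps the loop running, while the same behavior with tool name "finalize" ends it.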
Related Pages
- Openai_Openai_agents_python_Agent_Definition
- Openai_Openai_agents_python_Runner_Run
- Openai_Openai_agents_python_RunResult
- Environment:Openai_Openai_agents_python_Python_3_9_Runtime
- Environment:Openai_Openai_agents_python_OpenAI_API_Credentials
- Heuristic:Openai_Openai_agents_python_Tool_Choice_Reset_Prevents_Loops
- Heuristic:Openai_Openai_agents_python_GPT_5_Reasoning_Settings