Implementation: OpenAI Python Response Model
| Knowledge Sources | |
|---|---|
| Domains | NLP, Text_Generation |
| Last Updated | 2026-02-15 00:00 GMT |
Overview
A concrete, typed response model and streaming manager for extracting Responses API outputs, provided by the OpenAI Python SDK.
Description
The Response Pydantic model represents a complete Responses API result with output items, status, usage, and metadata. The ResponseStreamManager context manager provides high-level streaming with .text_stream for text deltas and event iteration for fine-grained control over 30+ event types.
Usage
Access .output_text for simple text extraction. Iterate .output for all output items. Use ResponseStreamManager as a context manager for streaming responses.
Code Reference
Source Location
- Repository: openai-python
- File: src/openai/types/responses/response.py
- Lines: L1-320
- File: src/openai/lib/streaming/responses.py (ResponseStreamManager)
Signature
class Response(BaseModel):
    id: str
    created_at: float
    model: str
    object: Literal["response"]
    output: List[ResponseOutputItem]
    output_text: str  # Convenience property aggregating text across all output items
    status: Literal["completed", "failed", "in_progress", "incomplete"]
    usage: ResponseUsage
    error: Optional[ResponseError] = None
    incomplete_details: Optional[IncompleteDetails] = None
    instructions: Optional[str] = None
    metadata: Optional[Dict[str, str]] = None
    previous_response_id: Optional[str] = None
    temperature: Optional[float] = None
    top_p: Optional[float] = None
    tool_choice: Optional[str] = None
    tools: Optional[List[Tool]] = None

class ResponseStreamManager(Generic[TextFormatT]):
    """High-level streaming context manager."""
    @property
    def text_stream(self) -> Iterator[str]: ...
    def __enter__(self) -> ResponseStream[TextFormatT]: ...
    def __exit__(self, ...) -> None: ...
    def get_final_response(self) -> Response: ...
    def get_final_text(self) -> str: ...
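As a mental model, the output_text convenience property can be viewed as a walk over the output list that concatenates text content. The stdlib sketch below is illustrative only — the class names (ResponseSketch, MessageItem, OutputTextPart) are hypothetical stand-ins, not the SDK's actual implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OutputTextPart:
    text: str
    type: str = "output_text"

@dataclass
class MessageItem:
    content: List[OutputTextPart]
    type: str = "message"

@dataclass
class ResponseSketch:
    """Illustrative stand-in for the SDK's Response model (not the real class)."""
    output: List[MessageItem] = field(default_factory=list)

    @property
    def output_text(self) -> str:
        # Concatenate every output_text part from every message item.
        return "".join(
            part.text
            for item in self.output if item.type == "message"
            for part in item.content if part.type == "output_text"
        )

resp = ResponseSketch(
    output=[MessageItem(content=[OutputTextPart("Hello, "), OutputTextPart("world!")])]
)
print(resp.output_text)  # Hello, world!
```

Non-message items (tool calls, web search calls) are skipped by the property, which is why iterating .output directly is needed when those matter.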
Import
from openai.types.responses import Response
# ResponseStreamManager returned by client.responses.stream()
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| response | Response or ResponseStreamManager | Yes | Raw API response from create()/stream() |
Outputs
| Name | Type | Description |
|---|---|---|
| output_text | str | Aggregated text output (convenience accessor) |
| output | list[ResponseOutputItem] | All output items (text, tool calls, etc.) |
| status | str | Response status (completed/failed/in_progress/incomplete) |
| usage | ResponseUsage | Token usage (input_tokens, output_tokens, total_tokens) |
| id | str | Response ID for retrieval/chaining |
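The status field is worth checking before trusting output_text. The sketch below dispatches on the four documented status values, operating on a plain dict shaped like a serialized response; the helper name handle_response is hypothetical:

```python
def handle_response(resp: dict) -> str:
    """Return output text, or raise/annotate based on response status."""
    status = resp["status"]
    if status == "completed":
        return resp["output_text"]
    if status == "incomplete":
        # incomplete_details carries a reason for truncation when present.
        reason = (resp.get("incomplete_details") or {}).get("reason", "unknown")
        return f"[truncated: {reason}] " + resp["output_text"]
    if status == "failed":
        raise RuntimeError(f"response failed: {resp.get('error')}")
    return ""  # in_progress: nothing final to read yet

print(handle_response({"status": "completed", "output_text": "done"}))  # done
```

The same branching applies unchanged to the typed Response object by swapping dict access for attribute access.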
Usage Examples
Extract Text
from openai import OpenAI
client = OpenAI()
response = client.responses.create(
model="gpt-4o",
input="What is machine learning?",
)
print(response.output_text)
print(f"Tokens: {response.usage.total_tokens}")
Iterate Output Items
response = client.responses.create(
model="gpt-4o",
input="Analyze this data.",
tools=[{"type": "web_search"}],
)
for item in response.output:
if item.type == "message":
for content in item.content:
if content.type == "output_text":
print(content.text)
elif item.type == "web_search_call":
print(f"Searched: {item.id}")
Streaming with Text Stream
with client.responses.stream(
model="gpt-4o",
input="Tell me a story.",
) as stream:
for text in stream.text_stream:
print(text, end="", flush=True)
final = stream.get_final_response()
print(f"\nTotal tokens: {final.usage.total_tokens}")
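Beyond .text_stream, the stream can also be iterated event by event for fine-grained control. The loop below runs against mock events so it is self-contained — the MockEvent dataclass and the sample sequence are illustrative; real events come from iterating the object yielded by client.responses.stream():

```python
from dataclasses import dataclass

@dataclass
class MockEvent:
    """Minimal stand-in for streaming event objects (illustrative only)."""
    type: str
    delta: str = ""

# Simulated event sequence; real streams emit many more event types.
events = [
    MockEvent("response.created"),
    MockEvent("response.output_text.delta", delta="Once "),
    MockEvent("response.output_text.delta", delta="upon a time"),
    MockEvent("response.completed"),
]

chunks = []
for event in events:
    if event.type == "response.output_text.delta":
        chunks.append(event.delta)   # incremental text fragment
    elif event.type == "response.completed":
        print("".join(chunks))       # Once upon a time
```

Dispatching on event.type like this is how tool-call progress, reasoning summaries, and other non-text events are surfaced during a stream.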