Principle: OpenAI Python Response Processing
| Knowledge Sources | |
|---|---|
| Domains | NLP, Text_Generation |
| Last Updated | 2026-02-15 00:00 GMT |
Overview
A data extraction pattern for consuming Responses API outputs including text, tool calls, and streaming events from a stateful response object.
Description
Response processing for the Responses API extracts data from the Response model, which contains a list of output items (text messages, tool calls, reasoning). The convenience accessor .output_text aggregates all generated text into a single string. For streaming, a ResponseStreamManager provides a .text_stream iterator as well as per-event iteration over 30+ typed event types.
Unlike Chat Completions, Responses are stateful objects with an ID, status, and the ability to be retrieved later. The status field indicates whether the response is complete, in progress, or failed.
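Because a response carries an ID and a status, a caller that retrieves it later must check whether generation has actually finished. A minimal sketch of that status check, using a SimpleNamespace stand-in where a real SDK call (shown only in a comment) would return a Response:

```python
from types import SimpleNamespace

# Stand-in for a Response fetched later, e.g. via
# client.responses.retrieve(response_id) in the real SDK.
response = SimpleNamespace(id="resp_123", status="completed")

def is_terminal(resp) -> bool:
    """A response is finished once it leaves the in-progress state."""
    return resp.status in ("completed", "failed", "incomplete")

print(is_terminal(response))
```

The helper name `is_terminal` and the stub object are illustrative; only the status values mirror the ones named in this document.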
Usage
Use this principle after every Responses API call to extract generated content. Use .output_text for quick text extraction. Iterate over .output for access to individual output items including tool calls.
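The iteration over .output can be sketched as a dispatch on each item's type. SimpleNamespace stand-ins are used below in place of the SDK's typed models; the shapes (a message item holding content parts, a function_call item holding a name and arguments) are assumptions about the output structure:

```python
from types import SimpleNamespace

# Stand-ins mimicking the shape of Responses API output items.
output = [
    SimpleNamespace(
        type="message",
        content=[SimpleNamespace(type="output_text", text="Hello!")],
    ),
    SimpleNamespace(type="function_call", name="get_weather",
                    arguments='{"city": "Paris"}'),
]

texts, tool_calls = [], []
for item in output:
    if item.type == "message":
        # Message items hold a list of content parts.
        texts.extend(part.text for part in item.content
                     if part.type == "output_text")
    elif item.type == "function_call":
        tool_calls.append((item.name, item.arguments))

print(texts)       # ["Hello!"]
print(tool_calls)  # [("get_weather", '{"city": "Paris"}')]
```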
Theoretical Basis
Response processing follows two patterns:
Non-streaming:
# Direct access on Response object
text = response.output_text    # Convenience: aggregated text output
items = response.output        # All output items
status = response.status       # completed/failed/in_progress/incomplete
usage = response.usage         # Token usage stats
response_id = response.id      # For retrieval/chaining
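The .output_text accessor can be pictured as a fold over the output list, keeping only text content parts. A rough stand-in equivalent (the item shapes are assumptions, not the SDK's actual models):

```python
from types import SimpleNamespace

def collect_output_text(output) -> str:
    """Concatenate text from message content parts, as .output_text does."""
    parts = []
    for item in output:
        if item.type == "message":
            for part in item.content:
                if part.type == "output_text":
                    parts.append(part.text)
    return "".join(parts)

output = [
    SimpleNamespace(type="reasoning", content=[]),
    SimpleNamespace(type="message",
                    content=[SimpleNamespace(type="output_text",
                                             text="Final answer.")]),
]
print(collect_output_text(output))  # "Final answer."
```

Note how non-text items such as reasoning blocks are skipped, which is why the accessor is safe to call on mixed output.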
Streaming:
# High-level text stream
with stream_manager as stream:
    for text_delta in stream.text_stream:
        process(text_delta)

# Low-level event iteration
with stream_manager as stream:
    for event in stream:
        if event.type == "response.output_text.delta":
            process(event.delta)
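The low-level loop reduces to accumulating text deltas until a completion event arrives. A self-contained sketch with stub events (real streams yield typed event objects; only the .type strings here match the ones used above):

```python
from types import SimpleNamespace

# Stub events mimicking a typed streaming event sequence.
events = [
    SimpleNamespace(type="response.output_text.delta", delta="Hel"),
    SimpleNamespace(type="response.output_text.delta", delta="lo"),
    SimpleNamespace(type="response.completed"),
]

buffer = []
for event in events:
    if event.type == "response.output_text.delta":
        buffer.append(event.delta)   # accumulate partial text
    elif event.type == "response.completed":
        break                        # full response is now assembled

print("".join(buffer))  # "Hello"
```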