Principle: OpenAI Python Response Creation
| Knowledge Sources | |
|---|---|
| Domains | NLP, Text_Generation |
| Last Updated | 2026-02-15 00:00 GMT |
Overview
A modern API invocation pattern that generates model responses with support for multi-turn chaining, background execution, structured outputs, and tool integration.
Description
The Responses API is OpenAI's newer generation endpoint, designed as a successor to Chat Completions with additional capabilities. It supports simple string inputs, structured input items, multi-turn conversation chaining via previous_response_id, background execution for long-running tasks, and built-in tool types (web search, file search, computer use) alongside custom function tools.
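To make the input shapes concrete, the sketch below builds a Responses-style request body as a plain dict so it can be inspected without an API call. The field names (`input`, `previous_response_id`, `tools`) mirror the Responses API, but the `build_request` helper itself is a hypothetical illustration, not part of the SDK.

```python
# Hypothetical helper: assembles a Responses-style request body as a dict.
def build_request(model, input, previous_response_id=None, tools=None):
    """`input` may be a plain string or a list of structured input items."""
    payload = {"model": model, "input": input}
    if previous_response_id is not None:
        payload["previous_response_id"] = previous_response_id
    if tools is not None:
        payload["tools"] = tools
    return payload

# Simple string input
simple = build_request("gpt-4.1", "Hello")

# Structured input items plus a built-in tool type
structured = build_request(
    "gpt-4.1",
    [{"role": "user", "content": "What changed in the latest release?"}],
    tools=[{"type": "web_search"}],
)
```

The same `create` call thus covers both the quick one-off string case and fully structured multi-item, tool-augmented requests.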
Unlike Chat Completions, the Responses API returns a stateful Response object that can be retrieved later and chained into follow-up requests. It also supports server-side conversation state management.
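The stateful chaining model can be illustrated with a minimal in-memory mock of the server side: each created response gets an id, and a follow-up request that passes `previous_response_id` picks up the stored turns. This is an illustrative toy of the pattern, not the service's actual implementation.

```python
import itertools

class FakeResponseStore:
    """Toy model of server-side conversation state in the Responses API."""

    def __init__(self):
        self._store = {}
        self._ids = itertools.count(1)

    def create(self, input, previous_response_id=None):
        # Recover any prior turns the "server" is holding for this chain
        history = []
        if previous_response_id is not None:
            history = self._store[previous_response_id]["history"]
        history = history + [input]
        response_id = f"resp_{next(self._ids)}"
        self._store[response_id] = {"id": response_id, "history": history}
        return self._store[response_id]

    def retrieve(self, response_id):
        # Responses remain retrievable after creation
        return self._store[response_id]

store = FakeResponseStore()
first = store.create("What is the capital of France?")
second = store.create("And its population?", previous_response_id=first["id"])
```

Because the server holds the history, the client never has to resend the full message list on each turn, which is the key contrast with Chat Completions.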
Usage
Use this principle for new applications that need the latest OpenAI features including background responses, built-in tools, and conversation chaining. Prefer the Responses API over Chat Completions for greenfield projects unless you need backward compatibility.
Theoretical Basis
The Responses API follows a Stateful Request-Response pattern:
from openai import OpenAI

client = OpenAI()
model = "gpt-4.1"  # any Responses-capable model

# Simple request
response = client.responses.create(model=model, input="text")

# Multi-turn chaining (server manages state)
response2 = client.responses.create(
    model=model,
    input="follow-up",
    previous_response_id=response.id,
)

# Background execution
background = client.responses.create(model=model, input="complex task", background=True)
# Poll later:
background = client.responses.retrieve(background.id)

# Three output modes:
# 1. Standard: Response object (as above)
# 2. Streaming: stream=True yields Stream[ResponseStreamEvent]
# 3. Structured: ParsedResponse[Schema] via client.responses.parse(...)
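A background request returns immediately with a non-terminal status, and the caller polls `retrieve` until the response finishes. The loop below sketches that pattern against a stub client so it runs offline; the stub and its canned statuses are stand-ins for a real SDK client, and the exact status strings are assumptions.

```python
import time

class StubClient:
    """Stand-in for an SDK client: 'completes' a background job after two polls."""

    def __init__(self):
        self._polls = 0

    def create(self, input, background=True):
        return {"id": "resp_bg_1", "status": "queued"}

    def retrieve(self, response_id):
        self._polls += 1
        status = "completed" if self._polls >= 2 else "in_progress"
        return {"id": response_id, "status": status, "output_text": "done"}

def wait_for_response(client, response_id, interval=0.01, timeout=5.0):
    """Poll until the background response reaches a terminal status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = client.retrieve(response_id)
        if response["status"] in ("completed", "failed", "cancelled"):
            return response
        time.sleep(interval)
    raise TimeoutError(f"response {response_id} did not finish in {timeout}s")

client = StubClient()
job = client.create("complex task", background=True)
result = wait_for_response(client, job["id"])
```

In production the same loop would simply swap the stub for the real client; a timeout and a modest poll interval keep the pattern safe for long-running tasks.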