
Implementation:OpenAI Python Chat Completion Response

From Leeroopedia
Knowledge Sources
Domains NLP, Text_Generation
Last Updated 2026-02-15 00:00 GMT

Overview

Concrete typed response models in the OpenAI Python SDK for extracting generated content from Chat Completions API responses.

Description

The ChatCompletion Pydantic model represents a complete API response containing generated choices, usage statistics, and metadata. The Stream class wraps Server-Sent Events into a typed iterator yielding ChatCompletionChunk objects with incremental deltas. Both models provide typed attribute access for reliable content extraction.

Usage

Use ChatCompletion for non-streaming responses. Iterate over Stream[ChatCompletionChunk] for streaming responses. Access .choices[0].message for content and tool calls.

Code Reference

Source Location

  • Repository: openai-python
  • File: src/openai/types/chat/chat_completion.py
  • Lines: L1-80
  • File: src/openai/_streaming.py
  • Lines: L22-122 (Stream), L124-224 (AsyncStream)

Signature

class ChatCompletion(BaseModel):
    id: str
    choices: List[Choice]
    created: int
    model: str
    object: Literal["chat.completion"]
    usage: Optional[CompletionUsage] = None
    system_fingerprint: Optional[str] = None
    service_tier: Optional[str] = None

class Choice(BaseModel):
    finish_reason: Literal["stop", "length", "tool_calls", "content_filter"]
    index: int
    message: ChatCompletionMessage
    logprobs: Optional[ChoiceLogprobs] = None

class ChatCompletionMessage(BaseModel):
    content: Optional[str] = None
    role: Literal["assistant"]
    tool_calls: Optional[List[ChatCompletionMessageToolCall]] = None
    refusal: Optional[str] = None

class Stream(Generic[_T]):
    """SSE stream iterator for ChatCompletionChunk objects."""
    def __iter__(self) -> Iterator[_T]: ...
    def __next__(self) -> _T: ...
    def close(self) -> None: ...
    def __enter__(self) -> Self: ...
    def __exit__(self, exc_type, exc, exc_tb) -> None: ...
    def response(self) -> httpx.Response: ...

class AsyncStream(Generic[_T]):
    """Async SSE stream iterator."""
    def __aiter__(self) -> AsyncIterator[_T]: ...
    async def __anext__(self) -> _T: ...
    async def close(self) -> None: ...

Import

from openai.types.chat import ChatCompletion, ChatCompletionChunk
from openai import Stream, AsyncStream
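Because create() returns either shape depending on the stream flag, downstream code often has to branch on the response type. A minimal sketch of that dispatch, with the caveat that the helper name response_text and the iterator check are ours, not part of the SDK (with the real package you could instead test isinstance(response, Stream)):

```python
from collections.abc import Iterator


def response_text(response) -> str:
    """Return the generated text from either response shape.

    Hypothetical helper: treats any iterator as a Stream[ChatCompletionChunk];
    anything else is assumed to be a ChatCompletion.
    """
    if isinstance(response, Iterator):
        # Streaming: concatenate the incremental content deltas,
        # skipping chunks that carry no choices.
        return "".join(
            chunk.choices[0].delta.content or ""
            for chunk in response
            if chunk.choices
        )
    # Non-streaming: the full message is already assembled.
    return response.choices[0].message.content or ""
```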

I/O Contract

Inputs

  • response: ChatCompletion or Stream[ChatCompletionChunk] (required). Raw API response from create().

Outputs

  • content: str or None. Generated text from choices[0].message.content.
  • tool_calls: list[ChatCompletionMessageToolCall] or None. Tool call requests from the model.
  • finish_reason: str. Why generation stopped ("stop", "length", "tool_calls", or "content_filter").
  • usage: CompletionUsage or None. Token usage (prompt_tokens, completion_tokens, total_tokens).

Usage Examples

Non-Streaming Response

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Extract content
text = response.choices[0].message.content
print(text)

# Check finish reason
if response.choices[0].finish_reason == "length":
    print("Response was truncated")

# Token usage (usage is Optional in the model, so guard against None)
if response.usage is not None:
    print(f"Tokens used: {response.usage.total_tokens}")
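The field accesses above can be folded into a small defensive accessor that surfaces refusals, content filtering, and truncation in one place. This is a sketch only: the helper name get_message_text and the error-handling choices are ours, not SDK behavior.

```python
def get_message_text(response) -> str:
    """Extract assistant text, surfacing refusals and truncation.

    Hypothetical helper; attribute access mirrors the ChatCompletion
    model shown in the Signature section.
    """
    choice = response.choices[0]
    if choice.message.refusal is not None:
        raise ValueError(f"Model refused: {choice.message.refusal}")
    if choice.finish_reason == "content_filter":
        raise ValueError("Response blocked by content filter")
    # content is None when the model responded only with tool calls
    text = choice.message.content or ""
    if choice.finish_reason == "length":
        text += "\n[truncated: max_tokens reached]"
    return text
```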

Streaming Response

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story."}],
    stream=True,
)

for chunk in stream:
    if not chunk.choices:  # some chunks can arrive without choices
        continue
    delta = chunk.choices[0].delta
    if delta.content is not None:
        print(delta.content, end="", flush=True)
print()  # Final newline
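The same loop can be packaged as a helper that also captures the finish_reason and skips choice-less chunks (for example, the trailing usage-only chunk sent when stream_options includes include_usage). A sketch under those assumptions; the helper name accumulate_stream is ours:

```python
def accumulate_stream(chunks):
    """Fold ChatCompletionChunk deltas into (full_text, finish_reason).

    Assumes a single choice per chunk, as in the loop above.
    """
    parts = []
    finish_reason = None
    for chunk in chunks:
        if not chunk.choices:  # e.g. a trailing usage-only chunk
            continue
        choice = chunk.choices[0]
        if choice.delta.content is not None:
            parts.append(choice.delta.content)
        if choice.finish_reason is not None:
            finish_reason = choice.finish_reason
    return "".join(parts), finish_reason
```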

Tool Call Extraction

import json

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in London?"}],
    tools=[{"type": "function", "function": {"name": "get_weather", "parameters": {"type": "object", "properties": {"city": {"type": "string"}}}}}],
)

if response.choices[0].finish_reason == "tool_calls":
    for tool_call in response.choices[0].message.tool_calls:
        name = tool_call.function.name
        args = json.loads(tool_call.function.arguments)
        print(f"Call {name} with {args}")
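After extracting the calls, a typical next step is executing them locally and building the role "tool" follow-up messages. A hypothetical sketch: run_tool_calls and registry are our names, and registry is assumed to map tool names to plain Python callables; the returned dicts follow the Chat Completions tool-message format.

```python
import json


def run_tool_calls(message, registry):
    """Execute each requested tool and build the follow-up "tool" messages.

    Sketch only: `registry` maps tool names to callables whose keyword
    arguments match the tool's JSON schema.
    """
    results = []
    for call in message.tool_calls or []:
        fn = registry[call.function.name]
        args = json.loads(call.function.arguments)  # arguments arrive as a JSON string
        results.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(fn(**args)),
        })
    return results
```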

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
