
Implementation: openai-python chat.completions.create

From Leeroopedia
Knowledge Sources
Domains NLP, Text_Generation
Last Updated 2026-02-15 00:00 GMT

Overview

Concrete tool for generating chat completions with streaming, structured output, and tool-calling support provided by the OpenAI Python SDK.

Description

The Completions resource class provides create() and parse() methods for sending chat completion requests. The create() method handles standard and streaming completions, while parse() adds structured output parsing using Pydantic models. The companion pydantic_function_tool() helper converts Pydantic models into tool definitions for function calling.

Usage

Use create() for standard text generation and streaming. Use parse() when you need the response to conform to a specific schema (structured outputs). Use pydantic_function_tool() to define tools from Pydantic models for tool-calling workflows.

Code Reference

Source Location

  • Repository: openai-python
  • File: src/openai/resources/chat/completions/completions.py
  • Lines: L60-1543 (Completions class with create/parse overloads)
  • File: src/openai/lib/_tools.py
  • Lines: L40-66 (pydantic_function_tool)

Signature

class Completions(SyncAPIResource):
    def create(
        self,
        *,
        messages: Iterable[ChatCompletionMessageParam],
        model: Union[str, ChatModel],
        stream: Optional[Literal[False]] | Literal[True] = False,
        temperature: Optional[float] | NotGiven = NOT_GIVEN,
        max_completion_tokens: Optional[int] | NotGiven = NOT_GIVEN,
        tools: Iterable[ChatCompletionToolParam] | NotGiven = NOT_GIVEN,
        response_format: ResponseFormat | NotGiven = NOT_GIVEN,
        top_p: Optional[float] | NotGiven = NOT_GIVEN,
        n: Optional[int] | NotGiven = NOT_GIVEN,
        stop: Union[Optional[str], List[str]] | NotGiven = NOT_GIVEN,
        presence_penalty: Optional[float] | NotGiven = NOT_GIVEN,
        frequency_penalty: Optional[float] | NotGiven = NOT_GIVEN,
        seed: Optional[int] | NotGiven = NOT_GIVEN,
        user: str | NotGiven = NOT_GIVEN,
        # ... additional parameters
    ) -> ChatCompletion | Stream[ChatCompletionChunk]:
        """
        Creates a model response for the given chat conversation.
        """

    def parse(
        self,
        *,
        messages: Iterable[ChatCompletionMessageParam],
        model: Union[str, ChatModel],
        response_format: type[ResponseFormatT],
        tools: Iterable[ChatCompletionToolParam] | NotGiven = NOT_GIVEN,
        # ... inherits most create() parameters
    ) -> ParsedChatCompletion[ResponseFormatT]:
        """
        Wrapper around create() that parses the response into a Pydantic model.
        """
def pydantic_function_tool(
    model: type[pydantic.BaseModel],
    *,
    name: str | None = None,
    description: str | None = None,
) -> ChatCompletionToolParam:
    """
    Converts a Pydantic model into a tool definition for function calling.
    """

Import

from openai import OpenAI, pydantic_function_tool

I/O Contract

Inputs

Name Type Required Description
messages Iterable[ChatCompletionMessageParam] Yes Conversation messages
model Union[str, ChatModel] Yes Model ID (e.g., "gpt-4o", "gpt-4o-mini")
stream bool No Enable SSE streaming (default False)
temperature float No Sampling temperature 0-2 (default 1)
max_completion_tokens int No Maximum tokens to generate
tools Iterable[ChatCompletionToolParam] No Tool definitions for function calling
response_format type or ResponseFormat No Structured-output format: a ResponseFormat param in create(), a Pydantic model class in parse()
top_p float No Nucleus sampling parameter
n int No Number of completions to generate
stop str or list[str] No One or more stop sequences
seed int No Seed for deterministic outputs

Outputs

Name Type Description
response (non-streaming) ChatCompletion Complete response with choices, usage stats
stream (streaming) Stream[ChatCompletionChunk] Iterator of incremental chunks
parsed (structured) ParsedChatCompletion[T] Response parsed into Pydantic model T

Usage Examples

Standard Completion

from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing briefly."},
    ],
    temperature=0.7,
    max_completion_tokens=500,
)
print(response.choices[0].message.content)
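The returned ChatCompletion also carries token accounting in its usage field. A small convenience reader (our own sketch, not part of the SDK):

```python
def summarize_usage(response):
    """Return (prompt, completion, total) token counts from a
    ChatCompletion's usage block, or None if usage is absent."""
    usage = response.usage
    if usage is None:
        return None
    return (usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
```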

Streaming

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku."}],
    stream=True,
)
for chunk in stream:
    # Guard against chunks with no choices (e.g. a trailing usage chunk).
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
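To reassemble the full message text from a stream, the deltas can be concatenated. A minimal helper (our own sketch; it assumes the chunks follow the ChatCompletionChunk shape and skips any chunk with an empty choices list, such as the trailing usage chunk sent when stream_options={"include_usage": True}):

```python
def collect_stream(stream):
    """Concatenate the content deltas of a chat-completion stream
    into the full assistant message text."""
    parts = []
    for chunk in stream:
        # Some chunks (e.g. a trailing usage chunk) carry no choices.
        if not chunk.choices:
            continue
        delta = chunk.choices[0].delta
        if delta.content:
            parts.append(delta.content)
    return "".join(parts)
```

For example: full_text = collect_stream(client.chat.completions.create(..., stream=True)).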

Structured Output with parse()

from pydantic import BaseModel
from openai import OpenAI

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

client = OpenAI()
completion = client.chat.completions.parse(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Extract event info."},
        {"role": "user", "content": "Alice and Bob meet for lunch on Friday."},
    ],
    response_format=CalendarEvent,
)
event = completion.choices[0].message.parsed
print(event.name, event.date, event.participants)
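parse() does not guarantee a parsed object: if the model declines the request, the message's refusal field is set and parsed is None. A defensive accessor (a sketch; parsed_or_raise is our own name, not an SDK function):

```python
def parsed_or_raise(completion):
    """Return message.parsed from a ParsedChatCompletion,
    raising if the model refused instead of answering."""
    message = completion.choices[0].message
    if message.refusal:
        raise ValueError(f"Model refused: {message.refusal}")
    return message.parsed
```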

Tool Calling

from pydantic import BaseModel
from openai import OpenAI
from openai import pydantic_function_tool

class GetWeather(BaseModel):
    """Get the current weather for a city."""
    city: str
    unit: str = "celsius"

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=[pydantic_function_tool(GetWeather)],
)
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, tool_call.function.arguments)
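The example above stops at the model's tool request. In a full loop, the caller executes the requested function and sends the result back as a role: "tool" message before calling create() again. A sketch of that second leg (handle_tool_calls and the registry mapping are our own constructs, not SDK APIs):

```python
import json

def handle_tool_calls(message, registry):
    """Execute each tool call requested in an assistant message and
    return the role="tool" messages to append to the conversation
    before the follow-up create() request.

    registry maps function name -> Python callable.
    """
    tool_messages = []
    for call in message.tool_calls or []:
        fn = registry[call.function.name]
        args = json.loads(call.function.arguments)
        tool_messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": str(fn(**args)),
        })
    return tool_messages
```

Append the assistant message itself, then these tool messages, to the running messages list and call create() again to obtain the final answer.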

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
