
Implementation: anthropic-sdk-python CompletionCreateParams

From Leeroopedia
Knowledge Sources
Domains API Types, Legacy Completions
Last Updated 2026-02-15 12:00 GMT

Overview

CompletionCreateParams is a collection of TypedDict classes that define the request parameters for the legacy Text Completions API. It provides separate types for streaming and non-streaming requests, with a shared base class containing common parameters for model selection, prompt formatting, sampling controls, and stop sequences.

Description

The module defines a hierarchy of typed dictionaries:

  1. CompletionCreateParamsBase -- The shared base containing all common parameters:
    • max_tokens_to_sample (required) -- Maximum number of tokens to generate.
    • model (required) -- The model identifier.
    • prompt (required) -- The prompt string, formatted with \n\nHuman: and \n\nAssistant: turns.
    • metadata -- Optional request metadata.
    • stop_sequences -- Additional sequences that cause generation to stop.
    • temperature -- Randomness control (0.0 to 1.0).
    • top_k -- Top-K sampling parameter.
    • top_p -- Nucleus sampling parameter.
    • betas -- Optional beta version headers.
  2. CompletionCreateParamsNonStreaming -- Extends the base with stream: Literal[False].
  3. CompletionCreateParamsStreaming -- Extends the base with stream: Required[Literal[True]].
  4. CompletionCreateParams -- A union type alias of the streaming and non-streaming variants.

The module also includes several deprecated aliases for backward compatibility: Metadata, CompletionRequestNonStreaming, CompletionRequestStreaming, CompletionRequestStreamingMetadata, and CompletionRequestNonStreamingMetadata.
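The prompt format described above alternates \n\nHuman: and \n\nAssistant: turns and must end with an open Assistant turn. A minimal sketch of building such a prompt (the build_prompt helper and the constant names are illustrative, not part of the SDK):

```python
# The "\n\nHuman:" / "\n\nAssistant:" markers follow the prompt format
# described above; the helper itself is illustrative, not an SDK API.
HUMAN_TURN = "\n\nHuman:"
ASSISTANT_TURN = "\n\nAssistant:"

def build_prompt(user_message: str) -> str:
    """Wrap a single user message in the legacy turn format,
    ending with an open Assistant turn for the model to complete."""
    return f"{HUMAN_TURN} {user_message}{ASSISTANT_TURN}"

prompt = build_prompt("Tell me a joke.")
print(repr(prompt))
```

A multi-turn prompt simply repeats the pair of markers, with the final Assistant turn left empty.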

Usage

Use these parameter types when calling client.completions.create(). The Text Completions API is legacy; the Messages API is recommended for new development.
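When migrating, the legacy parameters map fairly directly onto Messages API parameters (max_tokens_to_sample becomes max_tokens, and the prompt's Human turn becomes a user message). A sketch of that mapping as a pure data transformation, assuming a single-turn prompt (the function name is illustrative, not part of the SDK):

```python
from typing import Any, Dict

def completion_to_messages_params(params: Dict[str, Any]) -> Dict[str, Any]:
    """Illustrative sketch: map legacy CompletionCreateParams keys to
    their Messages API counterparts. Assumes a single Human turn."""
    human_marker, assistant_marker = "\n\nHuman:", "\n\nAssistant:"
    text = params["prompt"]
    # Extract the text between the Human and Assistant markers.
    user_text = text.split(human_marker, 1)[1].split(assistant_marker, 1)[0].strip()
    mapped: Dict[str, Any] = {
        "model": params["model"],
        "max_tokens": params["max_tokens_to_sample"],
        "messages": [{"role": "user", "content": user_text}],
    }
    # These optional keys keep the same names in the Messages API.
    for key in ("temperature", "top_k", "top_p", "stop_sequences", "metadata"):
        if key in params:
            mapped[key] = params[key]
    return mapped

legacy = {
    "model": "claude-2.1",
    "max_tokens_to_sample": 256,
    "prompt": "\n\nHuman: Tell me a joke.\n\nAssistant:",
    "temperature": 0.7,
}
print(completion_to_messages_params(legacy))
```

The resulting dict has the shape expected by client.messages.create().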

Code Reference

Source Location

Signature

class CompletionCreateParamsBase(TypedDict, total=False):
    max_tokens_to_sample: Required[int]
    model: Required[ModelParam]
    prompt: Required[str]
    metadata: MetadataParam
    stop_sequences: SequenceNotStr[str]
    temperature: float
    top_k: int
    top_p: float
    betas: Annotated[List[AnthropicBetaParam], PropertyInfo(alias="anthropic-beta")]


class CompletionCreateParamsNonStreaming(CompletionCreateParamsBase, total=False):
    stream: Literal[False]


class CompletionCreateParamsStreaming(CompletionCreateParamsBase):
    stream: Required[Literal[True]]


CompletionCreateParams = Union[CompletionCreateParamsNonStreaming, CompletionCreateParamsStreaming]

Import

from anthropic.types import CompletionCreateParams
from anthropic.types import CompletionCreateParamsNonStreaming
from anthropic.types import CompletionCreateParamsStreaming
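The interaction of total=False with Required[...] keys can be seen in a simplified, self-contained mirror of the hierarchy (field set trimmed for illustration; the real classes live in anthropic.types):

```python
# Simplified mirror of the SDK's TypedDict hierarchy, for illustration
# only. total=False makes keys optional by default; Required[...] keys
# must still be present for the dict to type-check.
from typing import Literal, Union
try:
    from typing import Required, TypedDict  # Python 3.11+
except ImportError:
    from typing_extensions import Required, TypedDict

class Base(TypedDict, total=False):
    max_tokens_to_sample: Required[int]
    model: Required[str]
    prompt: Required[str]
    temperature: float  # optional: may be omitted entirely

class NonStreaming(Base, total=False):
    stream: Literal[False]

class Streaming(Base):
    stream: Required[Literal[True]]

Params = Union[NonStreaming, Streaming]

# Optional keys (temperature, stream) are simply left out.
params: NonStreaming = {
    "model": "claude-2.1",
    "max_tokens_to_sample": 256,
    "prompt": "\n\nHuman: Hi\n\nAssistant:",
}
print(sorted(params))
```

At runtime a TypedDict is an ordinary dict, so such a value can be passed to client.completions.create(**params).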

I/O Contract

CompletionCreateParamsBase Fields

Field                 Type                      Required  Description
max_tokens_to_sample  int                       Yes       Maximum number of tokens to generate.
model                 ModelParam                Yes       The model to use for completion.
prompt                str                       Yes       The prompt string with \n\nHuman: and \n\nAssistant: turns.
metadata              MetadataParam             No        Request metadata object.
stop_sequences        SequenceNotStr[str]       No        Additional stop sequences.
temperature           float                     No        Randomness (0.0 to 1.0). Default 1.0.
top_k                 int                       No        Top-K sampling. Advanced use only.
top_p                 float                     No        Nucleus sampling. Advanced use only.
betas                 List[AnthropicBetaParam]  No        Beta version(s) to use (sent as the anthropic-beta header).

Streaming Variants

Class                               Additional Field                 Description
CompletionCreateParamsNonStreaming  stream: Literal[False]           Non-streaming request; stream defaults to False.
CompletionCreateParamsStreaming     stream: Required[Literal[True]]  Streaming request; stream is required and must be True.
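Because the union is discriminated by the stream key, code that accepts either variant can branch on it at runtime. A minimal sketch (the helper name is illustrative):

```python
from typing import Any, Dict

def is_streaming_request(params: Dict[str, Any]) -> bool:
    """Streaming requests must carry stream=True; non-streaming
    requests may set stream=False or omit the key entirely."""
    return params.get("stream", False) is True

print(is_streaming_request({"stream": True}))   # streaming variant
print(is_streaming_request({}))                 # non-streaming variant
```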

Usage Examples

import anthropic

client = anthropic.Anthropic()

# Non-streaming completion
completion = client.completions.create(
    model="claude-2.1",
    max_tokens_to_sample=256,
    prompt="\n\nHuman: Tell me a joke.\n\nAssistant:",
    temperature=0.7,
    stop_sequences=["\n\nHuman:"],
)
print(completion.completion)

# Streaming completion
stream = client.completions.create(
    model="claude-2.1",
    max_tokens_to_sample=256,
    prompt="\n\nHuman: Tell me a story.\n\nAssistant:",
    stream=True,
)
for event in stream:
    print(event.completion, end="", flush=True)

Related Pages

  • Completion -- The response model returned by the legacy Text Completions API.
