# Implementation: BerriAI LiteLLM Completion Request Types
| Field | Value |
|---|---|
| Knowledge Sources | BerriAI/litellm - litellm/types/completion.py, BerriAI/litellm - litellm/types/llms/openai.py |
| Domains | LLM Integration, Request Formatting, Type Safety |
| Last Updated | 2026-02-15 |
## Overview
A concrete tool for constructing standardized chat completion requests in OpenAI-compatible message formats. The litellm Python package provides it as typed dictionaries and Pydantic models in litellm/types/completion.py and litellm/types/llms/openai.py.
## Description
LiteLLM defines a hierarchy of TypedDict classes that mirror the OpenAI Chat Completions API message schema. These types provide static type checking and documentation for building messages with roles (system, user, assistant, tool, function) and structured content parts (text, images, tool calls). A CompletionRequest Pydantic model aggregates these messages with all standard generation parameters into a single validated request object.
The types in litellm/types/llms/openai.py re-export and extend OpenAI SDK types such as ChatCompletionAudioParam, ChatCompletionModality, and ChatCompletionPredictionContentParam for use in function signatures.
## Usage
Use these types when constructing messages lists for litellm.completion() or litellm.acompletion(), or when type-annotating your own functions that build LLM requests.
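Because TypedDicts are erased to plain `dict`s at runtime, a type-annotated builder function costs nothing at runtime while letting a static checker verify roles and content. A minimal sketch, where the local `SystemMessage`/`UserMessage` classes are hypothetical stand-ins mirroring the litellm types so the snippet runs without litellm installed:

```python
from typing import List, Literal, TypedDict, Union

# Hypothetical minimal stand-ins for litellm's message TypedDicts;
# the real ones live in litellm.types.completion.
class SystemMessage(TypedDict):
    role: Literal["system"]
    content: str

class UserMessage(TypedDict):
    role: Literal["user"]
    content: str

Message = Union[SystemMessage, UserMessage]

def build_messages(system_prompt: str, user_prompt: str) -> List[Message]:
    # TypedDict constructors are just dict() at runtime; the annotations
    # exist purely for static checkers like mypy or pyright.
    return [
        SystemMessage(role="system", content=system_prompt),
        UserMessage(role="user", content=user_prompt),
    ]

msgs = build_messages("You are terse.", "Hi")
print(msgs[0]["role"])  # system
```

In real code you would annotate with `ChatCompletionMessageParam` from `litellm.types.completion` instead of the local stand-ins.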
## Code Reference
Source Location: litellm/types/completion.py, lines 1-193
Key Type Signatures:
```python
class ChatCompletionSystemMessageParam(TypedDict, total=False):
    content: Required[str]
    role: Required[Literal["system"]]
    name: str


class ChatCompletionUserMessageParam(TypedDict, total=False):
    content: Required[Union[str, Iterable[ChatCompletionContentPartParam]]]
    role: Required[Literal["user"]]
    name: str


class ChatCompletionAssistantMessageParam(TypedDict, total=False):
    role: Required[Literal["assistant"]]
    content: Optional[str]
    function_call: FunctionCall
    name: str
    tool_calls: Iterable[ChatCompletionMessageToolCallParam]


class ChatCompletionToolMessageParam(TypedDict, total=False):
    content: Required[Union[str, Iterable[ChatCompletionContentPartParam]]]
    role: Required[Literal["tool"]]
    tool_call_id: Required[str]


class ChatCompletionFunctionMessageParam(TypedDict, total=False):
    content: Required[Union[str, Iterable[ChatCompletionContentPartParam]]]
    name: Required[str]
    role: Required[Literal["function"]]


ChatCompletionMessageParam = Union[
    ChatCompletionSystemMessageParam,
    ChatCompletionUserMessageParam,
    ChatCompletionAssistantMessageParam,
    ChatCompletionFunctionMessageParam,
    ChatCompletionToolMessageParam,
]


class CompletionRequest(BaseModel):
    model: str
    messages: List[ChatCompletionMessageParam] = []
    timeout: Optional[Union[float, int]] = None
    temperature: Optional[float] = None
    top_p: Optional[float] = None
    n: Optional[int] = None
    stream: Optional[bool] = None
    stop: Optional[dict] = None
    max_tokens: Optional[int] = None
    presence_penalty: Optional[float] = None
    frequency_penalty: Optional[float] = None
    logit_bias: Optional[dict] = None
    user: Optional[str] = None
    response_format: Optional[dict] = None
    seed: Optional[int] = None
    tools: Optional[List[str]] = None
    tool_choice: Optional[str] = None
    logprobs: Optional[bool] = None
    top_logprobs: Optional[int] = None
    deployment_id: Optional[str] = None
    functions: Optional[List[str]] = None
    function_call: Optional[str] = None
    base_url: Optional[str] = None
    api_version: Optional[str] = None
    api_key: Optional[str] = None
    model_list: Optional[List[str]] = None
```
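The Pydantic layer adds runtime validation on top of the static types: required fields are enforced and optional ones default to `None`. A minimal sketch of that behavior, using a hypothetical `MiniCompletionRequest` that copies a few of the fields above (not litellm's actual class):

```python
from typing import List, Optional

from pydantic import BaseModel, ValidationError

# Hypothetical stand-in mirroring a subset of CompletionRequest's fields;
# the real model lives in litellm.types.completion.
class MiniCompletionRequest(BaseModel):
    model: str
    messages: List[dict] = []
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None

# Valid request: required "model" supplied, the rest defaulted or set.
req = MiniCompletionRequest(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0.2,
)
print(req.model)  # gpt-4

# Invalid request: missing the required "model" field raises ValidationError.
try:
    MiniCompletionRequest(messages=[])
except ValidationError:
    print("missing model rejected")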
Content Part Types:
```python
class ChatCompletionContentPartTextParam(TypedDict, total=False):
    text: Required[str]
    type: Required[Literal["text"]]


class ImageURL(TypedDict, total=False):
    url: Required[str]
    detail: Literal["auto", "low", "high"]


class ChatCompletionContentPartImageParam(TypedDict, total=False):
    image_url: Required[ImageURL]
    type: Required[Literal["image_url"]]


ChatCompletionContentPartParam = Union[
    ChatCompletionContentPartTextParam,
    ChatCompletionContentPartImageParam,
]
```
Import:
```python
from litellm.types.completion import (
    ChatCompletionMessageParam,
    ChatCompletionSystemMessageParam,
    ChatCompletionUserMessageParam,
    ChatCompletionAssistantMessageParam,
    ChatCompletionToolMessageParam,
    CompletionRequest,
)
```
## I/O Contract
### Inputs
| Parameter | Type | Description |
|---|---|---|
| `role` | `Literal["system", "user", "assistant", "tool", "function"]` | Required. The role of the message author. Determines which TypedDict applies. |
| `content` | `str` or `Iterable[ChatCompletionContentPartParam]` | Required for most roles. The message content, either plain text or a list of content parts for multimodal input. |
| `tool_call_id` | `str` | Required for tool messages. The ID of the tool call this message responds to. |
| `tool_calls` | `Iterable[ChatCompletionMessageToolCallParam]` | Optional for assistant messages. The tool calls the model generated. |
| `name` | `str` | Optional. A name for the participant to differentiate between participants of the same role. |
### Outputs
| Output | Type | Description |
|---|---|---|
| Message dictionaries | `ChatCompletionMessageParam` | Typed dictionaries conforming to the OpenAI Chat Completions message format, ready to be passed to `litellm.completion()`. |
| `CompletionRequest` | Pydantic `BaseModel` | A validated request object that bundles messages with generation parameters. |
## Usage Examples
Simple text conversation:
```python
import litellm
from litellm.types.completion import (
    ChatCompletionSystemMessageParam,
    ChatCompletionUserMessageParam,
)

messages = [
    ChatCompletionSystemMessageParam(role="system", content="You are a helpful assistant."),
    ChatCompletionUserMessageParam(role="user", content="What is the capital of France?"),
]

response = litellm.completion(model="gpt-4", messages=messages)
```
Multimodal message with image:
```python
from litellm.types.completion import (
    ChatCompletionUserMessageParam,
    ChatCompletionContentPartTextParam,
    ChatCompletionContentPartImageParam,
    ImageURL,
)

messages = [
    ChatCompletionUserMessageParam(
        role="user",
        content=[
            ChatCompletionContentPartTextParam(type="text", text="What is in this image?"),
            ChatCompletionContentPartImageParam(
                type="image_url",
                image_url=ImageURL(url="https://example.com/image.png", detail="high"),
            ),
        ],
    )
]
```
Tool call response:
```python
from litellm.types.completion import ChatCompletionToolMessageParam

tool_response = ChatCompletionToolMessageParam(
    role="tool",
    content='{"temperature": 72, "unit": "fahrenheit"}',
    tool_call_id="call_abc123",
)
```
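Because a TypedDict constructor is just `dict()` at runtime, the tool response above is interchangeable with a literal dict. A quick stdlib-only check, using a hypothetical local `ToolMessage` stand-in for `ChatCompletionToolMessageParam` so it runs without litellm installed:

```python
import json
from typing import Literal, TypedDict

# Hypothetical minimal copy of ChatCompletionToolMessageParam.
class ToolMessage(TypedDict):
    role: Literal["tool"]
    content: str
    tool_call_id: str

tool_response = ToolMessage(
    role="tool",
    content=json.dumps({"temperature": 72, "unit": "fahrenheit"}),
    tool_call_id="call_abc123",
)

# At runtime the constructor produced a plain dict: both forms are equal.
assert tool_response == {
    "role": "tool",
    "content": '{"temperature": 72, "unit": "fahrenheit"}',
    "tool_call_id": "call_abc123",
}
```

This is why existing code that builds messages as raw dicts keeps working unchanged; the typed constructors only add static-analysis coverage.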