Implementation: Groq Python ChatCompletion Response

From Leeroopedia
Knowledge Sources
Domains: NLP, Data_Parsing
Last Updated: 2026-02-15 16:00 GMT

Overview

Concrete Pydantic models for parsing and accessing chat completion response data in the Groq Python SDK.

Description

The ChatCompletion class is a Pydantic BaseModel representing the complete response to a non-streaming chat completion request. It contains a list of Choice objects, each holding a ChatCompletionMessage with the generated text content and any tool calls. Groq-specific metadata is exposed through the x_groq field, typed as XGroq.

Usage

This model is automatically returned by client.chat.completions.create() when stream=False. Access generated content via response.choices[0].message.content. Check finish_reason to determine if the model completed naturally, hit a token limit, or requested tool calls.
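
As a sketch of that decision logic (the routing function and print statements here are illustrative, not part of the SDK), branching on finish_reason might look like:

from groq.types.chat import ChatCompletion

def route_completion(response: ChatCompletion) -> None:
    choice = response.choices[0]
    if choice.finish_reason == "stop":
        # Model finished naturally
        print(choice.message.content)
    elif choice.finish_reason == "length":
        # Output was truncated by the token limit; consider raising max_tokens
        print("Truncated:", choice.message.content)
    elif choice.finish_reason == "tool_calls":
        # Model is requesting tool execution rather than returning final text
        for call in choice.message.tool_calls or []:
            print("Tool requested:", call.function.name)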

Code Reference

Source Location

  • Repository: groq-python
  • File: src/groq/types/chat/chat_completion.py
  • Lines: L151-197 (ChatCompletion), L32-49 (Choice)

Signature

class ChatCompletion(BaseModel):
    id: str
    choices: List[Choice]
    created: int
    model: str
    object: Literal["chat.completion"]
    system_fingerprint: Optional[str] = None
    usage: Optional[CompletionUsage] = None
    x_groq: Optional[XGroq] = None

class Choice(BaseModel):
    finish_reason: Literal["stop", "length", "tool_calls", "function_call"]
    index: int
    logprobs: Optional[ChoiceLogprobs] = None
    message: ChatCompletionMessage

Import

from groq.types.chat import ChatCompletion
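
Because ChatCompletion is a standard Pydantic model, a raw response payload (for example, one read back from a cache) can be validated into typed objects. A minimal sketch, assuming Pydantic v2 methods and an illustrative payload:

raw = {
    "id": "chatcmpl-123",  # hypothetical response ID
    "object": "chat.completion",
    "created": 1735689600,
    "model": "llama-3.3-70b-versatile",
    "choices": [{
        "finish_reason": "stop",
        "index": 0,
        "message": {"role": "assistant", "content": "Hello there!"},
    }],
}

parsed = ChatCompletion.model_validate(raw)  # Pydantic v2 validation
print(parsed.choices[0].message.content)     # -> "Hello there!"
print(parsed.model_dump_json())              # serialize back to JSON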

I/O Contract

Inputs

Name     | Type           | Required | Description
(object) | ChatCompletion | Yes      | Returned automatically from Completions.create()

Outputs

Name                          | Type                                                     | Description
choices[i].message.content    | Optional[str]                                            | Generated text response
choices[i].message.tool_calls | Optional[List[ChatCompletionMessageToolCall]]            | Tool call requests from the model
choices[i].finish_reason      | Literal["stop", "length", "tool_calls", "function_call"] | Why generation stopped
usage                         | Optional[CompletionUsage]                                | Token usage stats (prompt_tokens, completion_tokens, total_tokens)
x_groq                        | Optional[XGroq]                                          | Groq-specific metadata (request ID, cache stats)
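
Both usage and x_groq are optional, so guard before dereferencing them. A short sketch, given a response object as in the examples below; the x_groq.id request-ID field is assumed from the metadata description above:

if response.usage:
    print("Total tokens:", response.usage.total_tokens)
if response.x_groq:
    print("Request ID:", response.x_groq.id)  # assumed field name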

Usage Examples

Extract Text Content

from groq import Groq

client = Groq()
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Hello!"}],
    model="llama-3.3-70b-versatile",
)

# Extract the generated text
text = response.choices[0].message.content
print(text)

# Check why generation stopped
print(response.choices[0].finish_reason)  # "stop"

# Token usage
if response.usage:
    print(f"Prompt: {response.usage.prompt_tokens}, "
          f"Completion: {response.usage.completion_tokens}")

Handle Tool Calls

# Reuses the `client` constructed in the previous example
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "What is the weather?"}],
    model="llama-3.3-70b-versatile",
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {"type": "object", "properties": {"city": {"type": "string"}}}
        }
    }],
)

if response.choices[0].finish_reason == "tool_calls":
    for tool_call in response.choices[0].message.tool_calls or []:
        print(f"Call: {tool_call.function.name}({tool_call.function.arguments})")

Related Pages

Implements Principle

Requires Environment
