Implementation: Cohere Python SDK Generation Model
| Knowledge Sources | |
|---|---|
| Domains | SDK, Text Generation |
| Last Updated | 2026-02-15 14:00 GMT |
Overview
Generation is a Pydantic model representing the non-streaming response from the Cohere Generate API, containing the generated text results and associated metadata.
Description
The Generation class encapsulates the complete response from the Cohere Generate API when called without streaming. It contains:
- id: A unique identifier for the generation request
- prompt: The original prompt text used for generation (optional, may not always be returned)
- generations: A list of SingleGeneration objects, where each entry represents one generated result. When `num_generations` is greater than 1, this list contains multiple results. Each SingleGeneration includes its own `id`, `text`, optional `index`, optional `likelihood`, and optional `token_likelihoods`.
- meta: Optional API metadata including token counts and warnings
The class extends UncheckedBaseModel and is auto-generated by the Fern API definition toolchain.
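The shape described above can be sketched with plain dataclasses. This is an illustrative stand-in only, not the SDK's code: the real classes are Pydantic models in `cohere.types` extending `UncheckedBaseModel`, but the field names and optionality below mirror the documented schema.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative analogues of the SDK's Pydantic models (assumption:
# simplified types; the real token_likelihoods holds
# SingleGenerationTokenLikelihoodsItem objects, not plain tuples).
@dataclass
class SingleGeneration:
    id: str
    text: str
    index: Optional[int] = None
    likelihood: Optional[float] = None
    token_likelihoods: Optional[list] = None

@dataclass
class Generation:
    id: str
    generations: List[SingleGeneration]
    prompt: Optional[str] = None
    meta: Optional[dict] = None

# Build a minimal response shape by hand, as the API would populate it
resp = Generation(
    id="req-123",
    generations=[SingleGeneration(id="gen-1", text="Hello, world.")],
)
print(resp.generations[0].text)  # Hello, world.
```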
Usage
Use `Generation` when working with the non-streaming Cohere Generate API response. This is the response type returned by `co.generate()` when `stream` is not set or is set to `False`.
Code Reference
Source Location
- Repository: Cohere Python SDK
- File: `src/cohere/types/generation.py`
Signature
```python
class Generation(UncheckedBaseModel):
    id: str
    prompt: typing.Optional[str] = None
    generations: typing.List[SingleGeneration]
    meta: typing.Optional[ApiMeta] = None
```
Import
```python
from cohere.types import Generation
```
I/O Contract
Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `id` | `str` | Yes | -- | Unique identifier for the generation request |
| `prompt` | `Optional[str]` | No | `None` | The prompt used for generation |
| `generations` | `List[SingleGeneration]` | Yes | -- | List of generated results; each contains `id`, `text`, optional `index`, optional `likelihood`, and optional `token_likelihoods` |
| `meta` | `Optional[ApiMeta]` | No | `None` | API metadata including token counts and warnings |
SingleGeneration Fields (Nested)
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `id` | `str` | Yes | -- | Unique identifier for this individual generation |
| `text` | `str` | Yes | -- | The generated text output |
| `index` | `Optional[int]` | No | `None` | The generation index (present when `num_generations` > 1) |
| `likelihood` | `Optional[float]` | No | `None` | Average log-likelihood of the generated text (when `return_likelihoods` is set) |
| `token_likelihoods` | `Optional[List[SingleGenerationTokenLikelihoodsItem]]` | No | `None` | Per-token log-likelihoods (when `return_likelihoods` is set) |
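A common use of the `likelihood` field is ranking candidates when `num_generations` > 1 and `return_likelihoods` is set. The sketch below uses hypothetical mock `(text, likelihood)` pairs rather than a live API call; with real responses you would iterate `response.generations` instead.

```python
# Mock (text, avg_log_likelihood) pairs standing in for SingleGeneration
# results -- values are invented for illustration, not real API output.
candidates = [
    ("Rustling Beans", -1.8),
    ("Brew Haven", -0.9),
    ("The Daily Grind", -1.2),
]

# Log-likelihoods are negative; the least-negative average wins.
best_text, best_ll = max(candidates, key=lambda pair: pair[1])
print(best_text)  # Brew Haven
```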
Usage Examples
Basic Text Generation
```python
import cohere

co = cohere.Client()

response = co.generate(
    prompt="Write a tagline for a coffee shop:",
    model="command",
    max_tokens=50,
)

print(f"Request ID: {response.id}")
print(f"Prompt: {response.prompt}")
print(f"Generated text: {response.generations[0].text}")
```
Multiple Generations
```python
import cohere

co = cohere.Client()

response = co.generate(
    prompt="Suggest a name for a new programming language:",
    model="command",
    max_tokens=30,
    num_generations=5,
)

print(f"Generated {len(response.generations)} options:")
for gen in response.generations:
    print(f"  [{gen.index}] {gen.text.strip()}")
```
Generation with Likelihoods
```python
import cohere

co = cohere.Client()

response = co.generate(
    prompt="The capital of France is",
    model="command",
    max_tokens=10,
    return_likelihoods="GENERATION",
)

gen = response.generations[0]
print(f"Text: {gen.text}")
print(f"Average likelihood: {gen.likelihood}")
if gen.token_likelihoods:
    for token_info in gen.token_likelihoods:
        print(f"  Token: {token_info.token!r} -> likelihood: {token_info.likelihood}")
```
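Because `likelihood` is an average log-likelihood, a derived metric sometimes computed from it is per-token perplexity, `exp(-avg_log_likelihood)`. This transformation is not part of the SDK; the value below is a hypothetical stand-in for `gen.likelihood`.

```python
import math

# Hypothetical average log-likelihood, as would appear in gen.likelihood
avg_log_likelihood = -0.5

# Perplexity: lower means the model found the text more predictable
perplexity = math.exp(-avg_log_likelihood)
print(round(perplexity, 3))  # 1.649
```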