Implementation: googleapis/python-genai Models Generate Content
| Knowledge Sources | |
|---|---|
| Domains | NLP, Generative_AI |
| Last Updated | 2026-02-15 00:00 GMT |
Overview
Concrete tool, provided by the google-genai models module, for generating text responses from Gemini models in both unary and streaming modes.
Description
Models.generate_content sends content to a specified Gemini model and returns a complete response. Models.generate_content_stream does the same but returns an iterator of partial responses for real-time display. Both methods accept the same parameters: a model identifier, content input, and optional configuration. The SDK handles dual-backend routing (Gemini Developer API vs Vertex AI), request transformation, authentication, and retry logic transparently.
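The dual-backend routing described above is selected at client construction time rather than per call. A minimal sketch (the project ID and location are placeholders; Vertex AI credentials are assumed to come from the standard application-default credential chain):

```python
from google import genai

# Gemini Developer API backend: authenticates with an API key.
dev_client = genai.Client(api_key="YOUR_API_KEY")

# Vertex AI backend: same Models surface, routed to Vertex AI.
# Project and location values here are illustrative placeholders.
vertex_client = genai.Client(
    vertexai=True,
    project="your-gcp-project",
    location="us-central1",
)

# Both clients expose the same generate_content / generate_content_stream
# methods on client.models; only transport and authentication differ.
```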
Usage
Use generate_content for standard request-response patterns where you process the complete output. Use generate_content_stream for chat interfaces, real-time displays, or when dealing with long outputs where incremental delivery improves user experience.
Code Reference
Source Location
- Repository: googleapis/python-genai
- File: google/genai/models.py
- Lines: L5507-5666 (generate_content), L5668-5861 (generate_content_stream)
Signature
class Models:
    def generate_content(
        self,
        *,
        model: str,
        contents: types.ContentListUnionDict,
        config: Optional[types.GenerateContentConfigOrDict] = None,
    ) -> types.GenerateContentResponse:
        """Generates content from a model.

        Args:
            model: Model resource ID (e.g., 'gemini-2.0-flash').
            contents: Input content - can be a string, Part, Content, or list thereof.
            config: Optional generation configuration.
        """

    def generate_content_stream(
        self,
        *,
        model: str,
        contents: types.ContentListUnionDict,
        config: Optional[types.GenerateContentConfigOrDict] = None,
    ) -> Iterator[types.GenerateContentResponse]:
        """Generates content with streaming response.

        Args:
            model: Model resource ID (e.g., 'gemini-2.0-flash').
            contents: Input content.
            config: Optional generation configuration.
        """
Import
from google import genai
from google.genai import types  # for GenerateContentConfig and related types
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| model | str | Yes | Model resource ID (e.g., 'gemini-2.0-flash', 'gemini-1.5-pro') |
| contents | ContentListUnionDict | Yes | Input content: string, Part, Content, list[Content], or dict equivalent |
| config | Optional[GenerateContentConfigOrDict] | No | Generation configuration (temperature, system_instruction, etc.) |
Outputs
| Name | Type | Description |
|---|---|---|
| generate_content returns | GenerateContentResponse | Complete response with .text, .parts, .candidates, .usage_metadata |
| generate_content_stream returns | Iterator[GenerateContentResponse] | Iterator of partial responses, each with incremental content |
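The config parameter accepts either a types.GenerateContentConfig instance or a plain dict with the same keys (GenerateContentConfigOrDict). A sketch with illustrative values (the temperature, token limit, and system instruction here are arbitrary choices, not defaults):

```python
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

# config passed as a plain dict; a types.GenerateContentConfig works equally.
config = {
    "temperature": 0.2,            # lower values give more deterministic output
    "max_output_tokens": 256,      # hard cap on response length
    "system_instruction": "You are a concise technical assistant.",
}
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize the transformer architecture in two sentences.",
    config=config,
)
print(response.text)
```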
Usage Examples
Unary Generation
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Explain the theory of relativity in simple terms.",
)
print(response.text)
Streaming Generation
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")
for chunk in client.models.generate_content_stream(
    model="gemini-2.0-flash",
    contents="Write a short story about a robot learning to paint.",
):
    print(chunk.text, end="")
Async Generation
import asyncio
from google import genai

async def main():
    client = genai.Client(api_key="YOUR_API_KEY")
    response = await client.aio.models.generate_content(
        model="gemini-2.0-flash",
        contents="What is machine learning?",
    )
    print(response.text)

asyncio.run(main())
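Async Streaming
The async and streaming variants combine as well. A sketch, assuming the pattern documented for the client.aio surface: the async stream method returns an awaitable that yields an async iterator, so it is awaited before iteration.

```python
import asyncio
from google import genai

async def main():
    client = genai.Client(api_key="YOUR_API_KEY")
    # Awaiting the call yields an async iterator of partial
    # GenerateContentResponse objects.
    stream = await client.aio.models.generate_content_stream(
        model="gemini-2.0-flash",
        contents="Write a haiku about streaming APIs.",
    )
    async for chunk in stream:
        print(chunk.text, end="")

asyncio.run(main())
```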