Implementation: googleapis/python-genai Chat Send Message
| Knowledge Sources | |
|---|---|
| Domains | NLP, Conversational_AI |
| Last Updated | 2026-02-15 00:00 GMT |
Overview
Concrete tool for sending messages and receiving responses within a chat session, provided by the chats module of the google-genai Python SDK.
Description
Chat.send_message sends a user message within a chat session: it calls Models.generate_content with the full conversation history plus the new message, then automatically appends both the user message and the model response to the internal history. Chat.send_message_stream provides the same functionality with streaming response delivery. Both methods accept an optional config parameter that overrides session-level settings for that specific turn.
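Conceptually, each call rebuilds the full request from the accumulated history plus the new turn. A minimal sketch of that bookkeeping, using plain dicts for clarity (the real implementation in google/genai/chats.py uses typed Content/Part objects):

```python
def build_request_contents(history: list[dict], new_message: str) -> list[dict]:
    """Simplified sketch of what Chat.send_message does internally: the new
    user turn is appended to the accumulated history and the whole list is
    passed to Models.generate_content. The real SDK uses typed Content/Part
    objects rather than plain dicts."""
    return list(history) + [{"role": "user", "parts": [{"text": new_message}]}]

# After the model replies, both the user turn and the model turn are recorded
# in the session history, so the next call sees the full conversation.
request = build_request_contents([], "Hello!")
```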
Usage
Call chat.send_message for each conversation turn. Pass a string, Part, or list of Parts as the message. Use send_message_stream for streaming responses. Use the config parameter to adjust generation settings for individual turns.
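Because the message parameter accepts PartUnionDict values, a turn can also be built from plain dicts instead of typed Part objects. A sketch (dict field names assumed from the Part schema: "text", and "inline_data" with "mime_type"/"data"):

```python
# A multi-part user message built from plain dicts; PartUnionDict accepts
# these alongside typed types.Part objects. Field names are assumed from
# the Part schema.
message = [
    {"text": "Describe this image briefly."},
    {"inline_data": {"mime_type": "image/png", "data": b"<png bytes>"}},
]

# response = chat.send_message(message)  # requires a live chat session
```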
Code Reference
Source Location
- Repository: googleapis/python-genai
- File: google/genai/chats.py
- Lines: L223-273 (send_message), L275-334 (send_message_stream)
Signature
class Chat:
    def send_message(
        self,
        message: Union[list[PartUnionDict], PartUnionDict],
        config: Optional[GenerateContentConfigOrDict] = None,
    ) -> GenerateContentResponse:
        """Sends a message and returns the model's response.

        Args:
            message: User message content (string, Part, or list of Parts).
            config: Optional per-message config override.
        """

    def send_message_stream(
        self,
        message: Union[list[PartUnionDict], PartUnionDict],
        config: Optional[GenerateContentConfigOrDict] = None,
    ) -> Iterator[GenerateContentResponse]:
        """Sends a message with streaming response.

        Args:
            message: User message content (string, Part, or list of Parts).
            config: Optional per-message config override.
        """
Import
from google import genai
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| message | Union[list[PartUnionDict], PartUnionDict] | Yes | User message: string, Part, or list of Parts |
| config | Optional[GenerateContentConfigOrDict] | No | Per-message config override (temperature, tools, etc.) |
Outputs
| Name | Type | Description |
|---|---|---|
| send_message returns | GenerateContentResponse | Complete model response with .text, .parts, etc. |
| send_message_stream returns | Iterator[GenerateContentResponse] | Stream of partial responses |
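Note that response.text can be None when the response carries no text (for example, a blocked candidate). A small defensive accessor, shown here as a hypothetical helper rather than part of the SDK:

```python
def safe_text(response, fallback: str = "") -> str:
    """Return response.text, or a fallback when the response carries no
    text (e.g. a blocked or empty candidate). Hypothetical helper; works
    with any object exposing an optional .text attribute."""
    text = getattr(response, "text", None)
    return text if text is not None else fallback

# print(safe_text(chat.send_message("Hello")))  # requires a live session
```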
Usage Examples
Basic Conversation
from google import genai
client = genai.Client(api_key="YOUR_API_KEY")
chat = client.chats.create(model="gemini-2.0-flash")
# Send messages
r1 = chat.send_message("Hello! What can you help me with?")
print(r1.text)
r2 = chat.send_message("Tell me about machine learning.")
print(r2.text)
r3 = chat.send_message("Can you elaborate on neural networks?")
print(r3.text)
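Recent google-genai releases expose Chat.get_history() for inspecting the accumulated turns. A compact summary helper (hypothetical, written against Content-like objects with .role and .parts):

```python
def summarize_history(history) -> list[tuple[str, int]]:
    """Return (role, total text length) per turn. Hypothetical helper;
    expects Content-like objects with .role and .parts, where each part
    may carry a .text attribute."""
    summary = []
    for content in history:
        chars = sum(len(p.text or "") for p in content.parts if hasattr(p, "text"))
        summary.append((content.role, chars))
    return summary

# If your SDK version exposes it (present in recent google-genai releases):
# for role, chars in summarize_history(chat.get_history()):
#     print(role, chars)
```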
Streaming Response
chat = client.chats.create(model="gemini-2.0-flash")
for chunk in chat.send_message_stream("Write a poem about coding."):
print(chunk.text, end="")
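When the full text is needed after streaming, the chunks can be joined with a small helper (hypothetical, not part of the SDK), which also skips chunks whose .text is None:

```python
from typing import Iterable

def assemble_stream_text(chunks: Iterable) -> str:
    """Join the .text of each streamed chunk, skipping empty or None ones.
    Hypothetical helper; works with any iterable of objects exposing an
    optional .text attribute."""
    parts = []
    for chunk in chunks:
        text = getattr(chunk, "text", None)
        if text:
            parts.append(text)
    return "".join(parts)

# full_text = assemble_stream_text(
#     chat.send_message_stream("Write a poem about coding.")
# )  # requires a live chat session
```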
Per-Message Config Override
from google.genai import types
chat = client.chats.create(model="gemini-2.0-flash")
# Normal response
r1 = chat.send_message("Give me a fact about space.")
print(r1.text)
# Creative response with higher temperature
r2 = chat.send_message(
"Now write a creative story about that fact.",
config=types.GenerateContentConfig(temperature=1.0)
)
print(r2.text)
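Since the parameter type is GenerateContentConfigOrDict, the override above can also be written as a plain dict without constructing a types object (dict keys assumed to match GenerateContentConfig field names):

```python
# Dict form of the same per-message override; GenerateContentConfigOrDict
# accepts plain dicts with GenerateContentConfig field names.
creative_config = {"temperature": 1.0, "top_p": 0.95}

# r2 = chat.send_message(
#     "Now write a creative story about that fact.",
#     config=creative_config,
# )  # requires a live chat session
```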