Implementation: LangChain Runnable.astream_events
| Knowledge Sources | |
|---|---|
| Domains | Streaming, Observability |
| Last Updated | 2026-02-11 00:00 GMT |
Overview
A concrete tool, provided by langchain-core, for streaming structured lifecycle events from LCEL chains.
Description
The Runnable.astream_events() method yields StreamEvent objects as the Runnable, and every nested Runnable in the chain, starts, streams, and ends. Events can be filtered by name, type, and tag. Schema version "v2" is recommended; it adds parent_ids for tracing event hierarchies.
Usage
Call astream_events() on any Runnable or chain. Use include_names/include_types/include_tags (or their exclude_* counterparts) to filter events down to specific components.
Code Reference
Source Location
- Repository: langchain
- File: libs/core/langchain_core/runnables/base.py
- Lines: L1273-1517
Signature
async def astream_events(
self,
input: Any,
config: RunnableConfig | None = None,
*,
version: Literal["v1", "v2"] = "v2",
include_names: Sequence[str] | None = None,
include_types: Sequence[str] | None = None,
include_tags: Sequence[str] | None = None,
exclude_names: Sequence[str] | None = None,
exclude_types: Sequence[str] | None = None,
exclude_tags: Sequence[str] | None = None,
**kwargs: Any,
) -> AsyncIterator[StreamEvent]:
Import
# Available on any Runnable (ChatOpenAI, chains, etc.)
from langchain_openai import ChatOpenAI
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| input | Any | Yes | Input to the Runnable/chain |
| version | Literal["v1", "v2"] | No (default: "v2") | Event schema version (use "v2") |
| include_names | Sequence[str] or None | No | Filter events by Runnable name |
| include_types | Sequence[str] or None | No | Filter events by Runnable type |
Outputs
| Name | Type | Description |
|---|---|---|
| return | AsyncIterator[StreamEvent] | Events: on_{type}_{start/stream/end} with data containing input, chunk, or output |
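For orientation, a v2 StreamEvent is a plain dict. The sketch below shows its typical keys; all values here are illustrative placeholders, not output from a real run:

```python
# Illustrative v2 StreamEvent shape; values are placeholders.
event = {
    "event": "on_chat_model_stream",  # on_{run_type}_{start|stream|end}
    "name": "ChatOpenAI",             # name of the emitting Runnable
    "run_id": "a1b2c3",               # placeholder; real run IDs are UUIDs
    "parent_ids": [],                 # v2 only: run IDs of ancestor Runnables
    "tags": [],
    "metadata": {},
    "data": {"chunk": "..."},         # input / chunk / output depending on phase
}
```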
Usage Examples
Streaming Events from a Chain
import asyncio

from langchain_openai import ChatOpenAI

async def main():
    llm = ChatOpenAI(model="gpt-4o-mini")
    async for event in llm.astream_events("Hello!", version="v2"):
        kind = event["event"]
        if kind == "on_chat_model_stream":
            chunk = event["data"]["chunk"]
            print(chunk.content, end="", flush=True)
        elif kind == "on_chat_model_end":
            print(f"\nDone! Run ID: {event['run_id']}")

asyncio.run(main())