Principle:Langchain_ai_Langgraph_Graph_Execution
| Metadata | Value |
|---|---|
| Type | Principle |
| Library | langgraph |
| Source | libs/langgraph/langgraph/pregel/main.py |
| Workflow | Building_a_Stateful_Graph |
Overview
Graph execution is the final step in the Building a Stateful Graph workflow: running the compiled graph with input data and collecting output. LangGraph uses the Pregel algorithm (Bulk Synchronous Parallel model) to execute nodes in discrete supersteps, where nodes within each step run in parallel and channel updates become visible only in the next step.
Description
The Pregel Execution Model
The compiled graph is an instance of Pregel, which organizes execution into a loop of supersteps. Each superstep has three phases:
- Plan -- Determine which nodes should execute. In the first superstep, this includes nodes triggered by the `START` channel (i.e., nodes connected from `START`). In subsequent supersteps, this includes nodes whose trigger channels were written to in the previous step.
- Execute -- Run all planned nodes in parallel. During this phase, channel updates from executing nodes are buffered and not yet visible to other nodes in the same step.
- Update -- Apply all buffered channel updates. This makes the new state visible for the next superstep's planning phase.
The loop continues until:
- No nodes are triggered (the graph has reached a quiescent state, typically when execution reaches `END`).
- A maximum step limit is reached.
- An interrupt is triggered.
Invoke vs. Stream
LangGraph offers two primary execution interfaces:
- `invoke()` -- Runs the graph to completion and returns the final state. This is a batch operation suitable when you need the complete result.
- `stream()` -- Returns an iterator that yields intermediate results as the graph executes. This enables real-time feedback, progress monitoring, and token-by-token LLM streaming.
Both methods accept the same configuration parameters. Internally, `invoke()` calls `stream()` and collects the results.
Stream Modes
The `stream_mode` parameter controls what data is emitted:
- `"values"` -- Emits the full state after each superstep.
- `"updates"` -- Emits only the node names and their partial state updates.
- `"custom"` -- Emits custom data written by nodes using `StreamWriter`.
- `"messages"` -- Emits LLM tokens with metadata for any LLM invocations inside nodes.
- `"checkpoints"` -- Emits events when checkpoints are created.
- `"tasks"` -- Emits events when tasks start and finish.
- `"debug"` -- Emits detailed debug information for each step.
Multiple stream modes can be active simultaneously by passing a list.
Channels and State Propagation
During execution, the state is stored in channels -- typed containers that hold values between supersteps. Each node reads from its trigger channels, processes the data, and writes updates to output channels. The channel system ensures:
- Isolation -- Nodes in the same superstep cannot see each other's writes.
- Determinism -- Given the same input and configuration, the graph produces the same output regardless of parallel execution order.
- Reducer application -- Channels with reducers (e.g., `BinaryOperatorAggregate`) automatically merge concurrent writes.
Persistence and Resumption
When a checkpointer is configured, the graph saves its state after each superstep. This enables:
- Pause and resume -- Interrupt the graph at specific nodes and resume later.
- Time travel -- Replay from any previously saved checkpoint.
- Fault tolerance -- Recover from failures by restarting from the last checkpoint.
Durability Modes
The `durability` parameter controls when checkpoint saves occur:
- `"sync"` -- Saves are persisted synchronously before the next step begins.
- `"async"` -- Saves happen asynchronously while the next step executes (default).
- `"exit"` -- Saves happen only when the graph finishes.
Usage
```python
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    x: int


def increment(state: State) -> dict:
    return {"x": state["x"] + 1}


builder = StateGraph(State)
builder.add_node(increment)
builder.add_edge(START, "increment")
builder.add_edge("increment", END)
graph = builder.compile()

# Batch execution
result = graph.invoke({"x": 0})
# {'x': 1}

# Streaming execution
for event in graph.stream({"x": 0}, stream_mode="updates"):
    print(event)
# {'increment': {'x': 1}}
```
Theoretical Basis
- Bulk Synchronous Parallel (BSP) -- The Pregel execution model is directly inspired by Google's Pregel system for large-scale graph processing. In BSP, computation proceeds in synchronized supersteps where all workers execute in parallel, exchange messages, and synchronize at a barrier before the next step.
- Actor model -- Each node behaves as an actor that processes messages (state reads) and produces responses (state writes). The Pregel engine acts as the message-passing infrastructure.
- Dataflow programming -- The graph defines a dataflow where data flows through channels between processing nodes. Execution is driven by data availability (trigger channels) rather than explicit control flow.
- Event sourcing -- With checkpointing enabled, the execution history is captured as a sequence of state snapshots, enabling replay and time-travel debugging.
Related Pages
- Implementation:Langchain_ai_Langgraph_Pregel_Invoke
- Heuristic:Langchain_ai_Langgraph_Retry_Policy_Configuration
- Heuristic:Langchain_ai_Langgraph_Recursion_Limit_Tuning
- Heuristic:Langchain_ai_Langgraph_Stream_Mode_Selection
- Langchain_ai_Langgraph_Graph_Compilation
- Langchain_ai_Langgraph_Edge_Configuration
- Langchain_ai_Langgraph_State_Schema_Definition