Implementation: LangChain AI LangGraph STORM Example
| Knowledge Sources | |
|---|---|
| Domains | Examples, Multi_Agent |
| Last Updated | 2026-02-11 16:00 GMT |
Overview
The STORM example implements a multi-agent research system that generates comprehensive Wikipedia-style articles through perspective-driven question generation, expert interviews, and iterative outline refinement.
Description
This example demonstrates the STORM (Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking) research pattern using LangGraph. The system orchestrates multiple AI agents across a pipeline of research stages: initial outline generation, related topic discovery, perspective-based editor selection, parallel expert interviews, outline refinement, reference indexing, section writing, and final article compilation.
The architecture uses two nested LangGraph `StateGraph` instances. The inner graph (`interview_graph`) manages a single interview conversation between an editor persona and a subject-matter expert. The editor asks questions from their unique perspective, the expert generates search queries via Tavily, retrieves web results, and formulates cited answers. The conversation continues for up to 5 turns or until the editor signals completion. The outer graph (`builder_of_storm`) orchestrates the full research pipeline: it initializes research by generating an outline and surveying related Wikipedia topics in parallel, conducts all interviews concurrently using `abatch`, refines the outline based on interview insights, indexes all cited references into a vector store (SKLearnVectorStore), writes each section using retrieved context, and compiles the final article.
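The inner graph's turn limit can be sketched as a plain routing function over the conversation history. This is a minimal sketch, not the source code: the message shape, node names, and the exact sentinel phrase the editor emits when satisfied are all assumptions.

```python
from typing import Literal

MAX_TURNS = 5  # the interview ends after this many expert answers
# Hypothetical sentinel phrase the editor emits when done (assumed wording)
SENTINEL = "Thank you so much for your help!"

def route_messages(
    messages: list[dict], expert_name: str = "Subject_Matter_Expert"
) -> Literal["ask_question", "end"]:
    """Decide whether the editor asks another question or the interview ends."""
    # Count how many answers the expert has already given
    num_responses = sum(1 for m in messages if m.get("name") == expert_name)
    if num_responses >= MAX_TURNS:
        return "end"
    # The editor's most recent question precedes the expert's last answer
    last_question = messages[-2] if len(messages) >= 2 else {}
    if last_question.get("content", "").endswith(SENTINEL):
        return "end"
    return "ask_question"
```

In the real graph this function is wired in as a conditional edge after the expert's answer node, so each turn re-evaluates whether to loop back to the editor.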
Key data models include `Perspectives` (a collection of `Editor` personas with affiliations and roles), `InterviewState` (messages, references, and editor for a single interview), and `ResearchState` (the full pipeline state including topic, outline, editors, interview results, sections, and final article). The system uses OpenAI models (`gpt-4o-mini` for fast operations, `gpt-4o` for long-context tasks) and Pydantic structured outputs throughout.
Usage
Use this example as a reference implementation for building multi-agent research systems with LangGraph. It demonstrates several advanced patterns: parallel sub-graph execution, perspective-driven question generation, tool-augmented expert responses, iterative refinement of structured outputs, and vector store-backed section writing. Invoke the compiled graph with a topic string to generate a complete research article.
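The concurrent-interview stage uses LangChain's `abatch`, which is conceptually equivalent to fanning the inner graph out over all editors at once. The stdlib-only sketch below illustrates that fan-out with `asyncio.gather`; the editor names and the `run_interview` stub are illustrative assumptions standing in for `interview_graph.ainvoke`.

```python
import asyncio

async def run_interview(editor: str) -> dict:
    # Stand-in for interview_graph.ainvoke({...}) with one editor persona
    await asyncio.sleep(0)  # simulate network/LLM I/O
    return {"editor": editor, "messages": [f"{editor}: question", "expert: answer"]}

async def conduct_interviews(editors: list[str]) -> list[dict]:
    # abatch semantics: run every interview concurrently, preserve input order
    return await asyncio.gather(*(run_interview(e) for e in editors))

results = asyncio.run(conduct_interviews(["historian", "engineer", "ethicist"]))
```

Because `gather` preserves input order, each interview result can later be matched back to the editor persona that drove it.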
Code Reference
Source Location
- Repository: Langchain_ai_Langgraph
- File: libs/cli/examples/graphs/storm.py
Signature
```python
# Data models (Editor first, since Perspectives references it)
class Editor(BaseModel):
    affiliation: str
    name: str
    role: str
    description: str

class Perspectives(BaseModel):
    editors: list[Editor]

class RelatedSubjects(BaseModel):
    topics: list[str]

class InterviewState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
    references: Annotated[dict | None, update_references]
    editor: Annotated[Editor | None, update_editor]

class ResearchState(TypedDict):
    topic: str
    outline: Outline
    editors: list[Editor]
    interview_results: list[InterviewState]
    sections: list[WikiSection]
    article: str

# Key functions
async def gen_answer(
    state: InterviewState,
    config: RunnableConfig | None = None,
    name: str = "Subject_Matter_Expert",
    max_str_len: int = 15000,
) -> dict: ...

def route_messages(state: InterviewState, name: str = "Subject_Matter_Expert") -> str: ...

async def generate_question(state: InterviewState) -> dict: ...

async def refine_outline(state: ResearchState) -> dict: ...

async def write_sections(state: ResearchState) -> dict: ...

async def write_article(state: ResearchState) -> dict: ...

# Compiled graphs
interview_graph = builder.compile()   # inner interview sub-graph
graph = builder_of_storm.compile()    # outer research pipeline
```
Import
```python
# This is an example file; import the compiled graph directly:
from libs.cli.examples.graphs.storm import graph
```
I/O Contract
| Outer Graph (ResearchState) | |||
|---|---|---|---|
| Field | Type | Direction | Description |
| `topic` | `str` | Input | The research topic to generate an article about |
| `outline` | `Outline` | Internal | Structured outline with sections and subsections |
| `editors` | `list[Editor]` | Internal | Generated editor personas with diverse perspectives |
| `interview_results` | `list[InterviewState]` | Internal | Results from all parallel expert interviews |
| `sections` | `list[WikiSection]` | Internal | Written sections with citations |
| `article` | `str` | Output | Final compiled Wikipedia-style article |
| Inner Graph (InterviewState) | |||
|---|---|---|---|
| Field | Type | Direction | Description |
| `messages` | `list[AnyMessage]` | I/O | Conversation history between editor and expert |
| `references` | `dict \| None` | Output | URL-to-content mapping of cited references |
| `editor` | `Editor \| None` | Input | The editor persona driving the interview |
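`gen_answer` caps the retrieved web content it feeds the model via `max_str_len`. A minimal sketch of that guard follows; the helper name and the `<Document>` wrapper format are assumptions, not copied from the source.

```python
def format_search_results(results: list[dict], max_str_len: int = 15000) -> str:
    """Render Tavily-style {url, content} hits for the expert's prompt,
    truncating the total so the context window is not overrun."""
    dump = "\n".join(
        f'<Document href="{r["url"]}">\n{r["content"]}\n</Document>' for r in results
    )
    return dump[:max_str_len]
```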
| Pipeline Stages | |
|---|---|
| Stage | Description |
| `init_research` | Generate initial outline and survey related Wikipedia topics in parallel |
| `conduct_interviews` | Run all editor-expert interviews concurrently via `abatch` |
| `refine_outline` | Refine the outline based on interview conversations |
| `index_references` | Index all cited references into a vector store |
| `write_sections` | Write each section using retrieved reference context |
| `write_article` | Compile all sections into a final cohesive article |
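The six stages above can be sketched as a sequential async pipeline over a plain dict state, mirroring how each LangGraph node returns a partial state update that is merged back in. The stub stage bodies below are placeholders for the real LLM-backed nodes, not their actual behavior.

```python
import asyncio

# Stub stages: each takes the running state and returns a state delta,
# mirroring how LangGraph nodes return partial updates.
async def init_research(state):
    return {"outline": f"Outline: {state['topic']}", "editors": ["historian", "engineer"]}

async def conduct_interviews(state):
    return {"interview_results": [f"{e} interview" for e in state["editors"]]}

async def refine_outline(state):
    return {"outline": state["outline"] + " (refined)"}

async def index_references(state):
    return {}  # side effect in the real graph: populate the vector store

async def write_sections(state):
    return {"sections": ["Intro", "Body"]}

async def write_article(state):
    return {"article": "\n".join(state["sections"])}

STAGES = [init_research, conduct_interviews, refine_outline,
          index_references, write_sections, write_article]

async def run_pipeline(topic: str) -> dict:
    state = {"topic": topic}
    for stage in STAGES:
        state.update(await stage(state))  # merge each node's delta into state
    return state

final = asyncio.run(run_pipeline("AGI"))
```

The real outer graph adds edges and parallelism (e.g. `init_research` fans out outline generation and related-topic survey), but the state-threading shape is the same.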
Usage Examples
```python
import asyncio

from libs.cli.examples.graphs.storm import graph

async def main():
    # Run the full STORM research pipeline
    result = await graph.ainvoke({"topic": "Artificial General Intelligence"})

    # Access the final article
    print(result["article"])

    # Access the refined outline
    print(result["outline"].as_str)

    # Access individual sections
    for section in result["sections"]:
        print(section.as_str)

asyncio.run(main())
```