Implementation: Diagram of Thought Iterative Reasoner Prompt Loading
| Knowledge Sources | |
|---|---|
| Domains | Reasoning, Prompt_Engineering, LLM_Configuration |
| Last Updated | 2026-02-14 04:30 GMT |
Overview
Concrete artifact for configuring an LLM to perform DoT iterative reasoning by loading the iterative-reasoner.md system prompt.
Description
The iterative-reasoner.md file (67 lines) is the core prompt artifact of the Diagram of Thought framework. It serves as the API specification for DoT reasoning sessions -- the "API" is the prompt file itself, and its consumers are LLM endpoints that accept a system message parameter.
The file is organized into five structural sections:
- Role Declarations (L3-7): Introduces the three XML-tagged roles (`<proposer>`, `<critic>`, `<summarizer>`) that the model must alternate between during reasoning.
- Roles and Responsibilities (L9-34): Provides detailed objective descriptions, behavioral instructions, and output format requirements for each role. The proposer generates reasoning steps and builds upon validated propositions. The critic analyzes propositions for logical consistency and provides detailed natural language feedback. The summarizer synthesizes validated propositions, reviews the DAG of reasoning, and determines whether the reasoning is complete.
- Process Flow (L36-41): Defines the iterative cycle -- the proposer presents steps, the critic evaluates them, the summarizer assesses completeness, and the cycle repeats until the summarizer confirms a final answer.
- Formatting Guidelines (L43-48): Establishes quality constraints including clarity, logical progression, correct XML tag usage, and natural language explanations in critiques.
- Example Interaction (L50-67): Provides a concrete XML tag alternation pattern demonstrating the expected output structure (proposer, critic, proposer, critic, summarizer).
A minimal template (34 lines) is also available inline in README.md:L63-97. This variant adds the typed serialization protocol (@node, @edge, @status records) for formal auditability and DAG extraction, but omits the detailed role descriptions and formatting guidelines present in the full prompt.
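To illustrate how the typed records enable DAG extraction, the following sketch parses `@node`, `@edge`, and `@status` lines out of a transcript. The record grammar follows the minimal template; the `parse_dot_records` helper itself is hypothetical, not part of the repository:

```python
import re

# Hypothetical helper: extract the reasoning DAG from a DoT transcript
# that interleaves @node / @edge / @status records (minimal-template mode).
def parse_dot_records(transcript: str) -> dict:
    nodes, edges, status = {}, [], {}
    for line in transcript.splitlines():
        line = line.strip()
        if m := re.match(r"@node id=(\d+) role=(\w+)", line):
            nodes[int(m.group(1))] = m.group(2)
        elif m := re.match(r"@edge src=(\d+) dst=(\d+) kind=(\w+)", line):
            src, dst = int(m.group(1)), int(m.group(2))
            assert src < dst, "edges must point forward (i < n)"
            edges.append((src, dst, m.group(3)))
        elif m := re.match(r"@status target=(\d+) mark=(\w+)", line):
            status[int(m.group(1))] = m.group(2)
    return {"nodes": nodes, "edges": edges, "status": status}
```

Non-record lines (the natural-language reasoning itself) are simply skipped, so the same pass works on a full mixed transcript.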
Usage
Load this prompt as the system message when initializing an LLM session for DoT reasoning. The prompt is plain text and works with any LLM API that supports a system-level instruction:
- OpenAI API: Pass as the `"system"` role message in the `messages` array.
- Anthropic API: Pass as the `system` parameter in the messages request.
- Open-source servers (vLLM, llama.cpp, Ollama, etc.): Pass as the system prompt in the chat template.
No additional dependencies, libraries, or middleware are required. Once the system prompt is loaded, the session is ready to accept a <problem> input and the model will begin the iterative reasoning cycle.
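Once a response comes back, the role-tagged output can be split into per-role segments with a single regex pass. This helper is illustrative, not part of the repository:

```python
import re

# Illustrative sketch: split a DoT response into (role, text) segments
# based on the <proposer>/<critic>/<summarizer> tags the prompt mandates.
ROLE_TAG = re.compile(r"<(proposer|critic|summarizer)>(.*?)</\1>", re.DOTALL)

def split_roles(response_text: str) -> list[tuple[str, str]]:
    return [(m.group(1), m.group(2).strip()) for m in ROLE_TAG.finditer(response_text)]
```

The backreference `</\1>` ensures each segment closes with the same tag it opened with, so mismatched tags are silently dropped rather than mis-attributed.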
Code Reference
Source Location
- Repository: Diagram of Thought
- File: `prompts/iterative-reasoner.md`
- Lines: L1-67 (entire file)
Signature
The prompt defines the following structural contract:
```
# Roles (XML-tagged behavioral contracts)
#
# <proposer> (L12-18)
#   Objective : Propose one or more reasoning steps towards solving the problem
#   Behavior  : Generate clear, concise propositions; build upon valid prior steps;
#               incorporate critic feedback
#   Output    : Text enclosed in <proposer>...</proposer> tags
#
# <critic> (L20-26)
#   Objective : Critically evaluate the proposer's reasoning steps
#   Behavior  : Analyze for logical consistency and accuracy; provide detailed
#               natural language critiques; highlight errors and improvements
#   Output    : Text enclosed in <critic>...</critic> tags
#
# <summarizer> (L28-34)
#   Objective : Synthesize validated propositions into a coherent chain-of-thought
#   Behavior  : Review the DAG of propositions and critiques; extract valid steps;
#               determine completeness; present the final answer
#   Output    : Text enclosed in <summarizer>...</summarizer> tags
#
# Process Flow (L37-41)
#
#   1. <proposer>   -- presents one or more reasoning steps
#   2. <critic>     -- analyzes steps, provides critiques and refinements
#   3. <summarizer> -- reviews validated propositions, checks completeness
#   4. Repeat       -- cycle continues until <summarizer> confirms reasoning complete
#
# Formatting Constraints (L43-48)
#
#   - Clarity             : each step and critique must be easy to understand
#   - Logical progression : propositions follow logically from predecessors
#   - Tags                : output always enclosed in correct XML tags
#   - Natural language    : critiques use detailed explanations for meaningful feedback
```
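The process flow above can be driven as a plain loop that re-prompts the model until a `<summarizer>` block appears. The sketch below is illustrative only; `call_model` is a hypothetical stand-in for any chat-completion call:

```python
# Sketch of the process-flow loop: keep prompting until the model's
# <summarizer> declares the reasoning complete. `call_model` is a
# hypothetical callable taking a messages list and returning a string.
def run_dot_session(call_model, system_prompt: str, problem: str, max_rounds: int = 8) -> str:
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"<problem>{problem}</problem>"},
    ]
    reply = ""
    for _ in range(max_rounds):
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
        if "<summarizer>" in reply:  # summarizer has produced a final answer
            return reply
        messages.append({"role": "user", "content": "Continue the reasoning cycle."})
    return reply  # give up after max_rounds; caller can inspect the partial trace
```

In practice a single response often contains several proposer/critic turns, so the loop mainly guards against sessions where the summarizer never fires.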
Import
```python
from openai import OpenAI

client = OpenAI()

# Load the DoT system prompt
with open("prompts/iterative-reasoner.md", "r") as f:
    dot_system_prompt = f.read()

# Use with any LLM API -- OpenAI example:
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": dot_system_prompt},
        # ... followed by user messages containing the <problem> input
    ],
)
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| prompt_file | str | Yes | Path to `prompts/iterative-reasoner.md` (full mode) or inline minimal template from README.md:L63-97 (minimal mode) |
| mode | str | No | `"full"` (67-line detailed prompt with role descriptions, formatting guidelines, and example interaction) or `"minimal"` (34-line concise template with typed serialization records). Default: `"full"` |
| llm_endpoint | str | Yes | Target LLM API endpoint (OpenAI, Anthropic, open-source inference servers, etc.) |
Outputs
| Name | Type | Description |
|---|---|---|
| configured_session | LLM Session | LLM session with DoT system prompt loaded, ready to accept `<problem>` input and begin the iterative propose-critique-summarize cycle |
Usage Examples
Full Mode: Loading iterative-reasoner.md with OpenAI API
```python
from openai import OpenAI

client = OpenAI()

# Load the full 67-line DoT system prompt
with open("prompts/iterative-reasoner.md", "r") as f:
    dot_system_prompt = f.read()

# Initialize a DoT reasoning session
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": dot_system_prompt},
        {"role": "user", "content": "How many times does the letter 'r' appear in 'strawberry'?"},
    ],
)

# The model will respond with <proposer>...</proposer>, <critic>...</critic>, etc.
print(response.choices[0].message.content)
```
Minimal Mode: Inline Template with Anthropic API
```python
import anthropic

client = anthropic.Anthropic()

# Minimal DoT template (from README.md:L63-97) with typed records
minimal_dot_prompt = """You are a single model that performs Diagram-of-Thought (DoT) reasoning.
Your goal is to build a graph of reasoning steps to solve the problem.
You will use the following roles: <problem>, <proposer>, <critic>, and <summarizer>.
When possible, interleave typed records for auditability:
@node id=<n> role={problem|proposer|critic|summarizer}
@edge src=<i> dst=<n> kind={use|critique|refine}   (must have i < n)
@status target=<i> mark={validated|invalidated}
"""

# Initialize a DoT reasoning session with the minimal template
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    system=minimal_dot_prompt,
    messages=[
        {"role": "user", "content": "Compare the numbers 9.11 and 9.8. Which is larger?"}
    ],
)

# The model will respond with role-tagged reasoning and @node/@edge/@status records
print(response.content[0].text)
```
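For auditability, the `@status` marks can be used to recover the validated subgraph after a minimal-mode run. The sketch below assumes the records have already been parsed into plain dicts and lists; it is a hypothetical audit helper, not part of the repository:

```python
# Hypothetical audit helper: given parsed @node / @edge / @status data,
# keep only the validated nodes and the edges that connect them.
def validated_subgraph(nodes: dict, edges: list, status: dict):
    kept = {n for n in nodes if status.get(n) == "validated"}
    kept_edges = [(s, d, k) for (s, d, k) in edges if s in kept and d in kept]
    # Node ids are already in topological order, since edges require i < n.
    return sorted(kept), kept_edges
```

The surviving node ids, read in order, form the coherent chain-of-thought the summarizer is meant to synthesize.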