

Implementation:Diagram of Thought LLM Response Capture

From Leeroopedia


Overview

LLM Response Capture is a concrete pattern for capturing the full LLM response text containing Diagram of Thought reasoning traces with XML tags (<proposer>, <critic>, <summarizer>) and typed records (@node, @edge, @status).

Description

Capture the complete text output from the LLM session. The output contains interleaved XML-tagged role blocks with embedded typed records that encode the reasoning DAG. The capture must preserve the entire response without truncation or modification, as every XML tag and typed record is structurally significant. This pattern is the concrete realization of the Raw Output Collection principle.

Usage

Apply this pattern after any DoT reasoning session. Specifically:

  • After sending a DoT system prompt and user problem to an LLM and receiving the complete response.
  • When the response must be stored verbatim for downstream parsing, graph extraction, or audit purposes.
  • Regardless of which LLM provider or client library is used -- the capture logic adapts to each API but the output semantics remain the same.

Code Reference

Source

  • prompts/iterative-reasoner.md:L1-67 -- Defines the DoT prompt structure including XML role tags and the expected output format.
  • README.md:L110-114 -- Describes the operational view: the model generates a single stream of text containing interleaved role tokens.

Signature

def capture_dot_output(
    llm_client,
    dot_system_prompt: str,
    problem_text: str,
    **generation_kwargs
) -> str:
    """
    Send a DoT reasoning prompt to an LLM and capture the full response.

    Args:
        llm_client: An initialized LLM client instance (provider-specific).
        dot_system_prompt: The Diagram of Thought system prompt defining
                          proposer/critic/summarizer roles and typed records.
        problem_text: The user problem or task to reason about.
        **generation_kwargs: Additional generation parameters (e.g., temperature,
                            max_tokens) passed through to the LLM API.

    Returns:
        The complete raw output string containing all XML role blocks
        and typed records (@node, @edge, @status).
    """
    ...
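The body can be sketched as follows, assuming an OpenAI-style chat-completions client (the call shape is an illustrative assumption; only the return semantics -- the full, unmodified response string -- are part of the pattern). The model name and any sampling parameters arrive via `generation_kwargs`:

```python
def capture_dot_output(
    llm_client,
    dot_system_prompt: str,
    problem_text: str,
    **generation_kwargs,
) -> str:
    """Send a DoT reasoning prompt and return the full, unmodified response."""
    # Assumes an OpenAI-style chat-completions interface; adapt the call
    # for other providers while keeping the return semantics identical.
    response = llm_client.chat.completions.create(
        messages=[
            {"role": "system", "content": dot_system_prompt},
            {"role": "user", "content": problem_text},
        ],
        **generation_kwargs,
    )
    # Return the raw text exactly as received: no strip(), no re-encoding,
    # so every XML role tag and typed record survives for downstream parsing.
    return response.choices[0].message.content
```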

Import

This pattern depends on the LLM client library used for inference. Common choices include:

  • openai -- for OpenAI API access
  • anthropic -- for Anthropic API access
  • Any library that returns the complete text response from a chat completion endpoint

I/O Contract

  • Input -- str (LLM-generated DoT reasoning output): The complete response text produced by the LLM after processing the DoT system prompt and user problem, exactly as returned by the LLM API.
  • Output -- str (raw text with all XML tags and typed records): A single string preserving the full interleaved structure: <proposer>..@node..</proposer>, <critic>..@edge..@status..</critic>, and <summarizer>...</summarizer> blocks in their original order.
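For illustration, a captured string satisfying this contract might look like the following. The tag names and record markers come from the pattern; the wording inside each record is a made-up example, not a prescribed syntax:

```python
# Hypothetical captured response; record contents are illustrative only.
raw_output = """<proposer>
@node n1: Rewrite the equation as x^2 - 4 = 0.
</proposer>
<critic>
@edge root -> n1
@status n1: valid
</critic>
<summarizer>
The roots are x = 2 and x = -2, established via node n1.
</summarizer>"""
```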

Invariants:

  • The output string must be identical to the LLM response content -- no stripping, trimming, or post-processing.
  • All XML role tags (<proposer>, <critic>, <summarizer>) and typed records (@node, @edge, @status) must be present and intact.

Usage Examples

The following examples demonstrate capturing the raw DoT output using two common LLM client libraries.

OpenAI API

# OpenAI
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": dot_system_prompt},
        {"role": "user", "content": problem_text}
    ]
)
raw_output = response.choices[0].message.content

Anthropic API

# Anthropic
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=4096,  # required by the Messages API
    system=dot_system_prompt,
    messages=[{"role": "user", "content": problem_text}]
)
raw_output = response.content[0].text
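Both clients report whether generation stopped naturally, which matters here because a token-limit cutoff silently drops the tail of the reasoning DAG. A hedged guard (the helper name is ours, not part of either SDK):

```python
def ensure_complete(reason: str) -> None:
    """Raise if the response ended by hitting a token limit.

    OpenAI reports finish_reason == "length" and Anthropic reports
    stop_reason == "max_tokens" when output was cut off; either way
    the captured DoT trace would be missing its tail.
    """
    if reason in ("length", "max_tokens"):
        raise RuntimeError(f"DoT capture may be truncated: {reason!r}")

# OpenAI:    ensure_complete(response.choices[0].finish_reason)
# Anthropic: ensure_complete(response.stop_reason)
```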

Verifying Capture Completeness

# Quick sanity check that the raw output contains expected structural markers
assert "<proposer>" in raw_output, "Missing proposer blocks"
assert "<critic>" in raw_output, "Missing critic blocks"
assert "<summarizer>" in raw_output, "Missing summarizer blocks"
assert "@node" in raw_output, "Missing @node typed records"
assert "@edge" in raw_output, "Missing @edge typed records"
assert "@status" in raw_output, "Missing @status typed records"
