# Environment: Diagram of Thought LLM API
| Knowledge Sources | |
|---|---|
| Domains | Infrastructure, LLMs |
| Last Updated | 2026-02-14 04:30 GMT |
## Overview
LLM API environment providing access to a capable large language model (GPT-4, Claude, or equivalent) with system prompt and structured output support.
## Description
This environment defines the external API dependencies required to run DoT (Diagram of Thought) reasoning workflows. The core requirement is access to a large language model API that supports system message injection and XML tag following in autoregressive generation. Compatible providers include OpenAI (ChatCompletion API), Anthropic (Messages API), or any equivalent LLM endpoint. The model must be capable of following structured prompts with role alternation between proposer, critic, and summarizer XML tags.
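As a minimal sketch of what "system message injection" means in practice, the payload for one DoT request might be built like this. `DOT_SYSTEM_PROMPT` and `build_messages` are hypothetical names, not part of any SDK; the dict shape follows the OpenAI Chat Completions convention (Anthropic's Messages API takes the system string via a separate `system` parameter instead of a `"system"` role message):

```python
# Sketch: provider-agnostic message list for a single DoT reasoning request.
# DOT_SYSTEM_PROMPT is a placeholder for the actual iterative-reasoner prompt.

DOT_SYSTEM_PROMPT = (
    "You reason by alternating <proposer>, <critic>, and <summarizer> roles "
    "to construct a diagram of thought."
)

def build_messages(problem: str) -> list[dict]:
    """Return system + user messages for one DoT reasoning request
    (OpenAI-style; for Anthropic, pass the system string separately)."""
    return [
        {"role": "system", "content": DOT_SYSTEM_PROMPT},
        {"role": "user", "content": problem},
    ]
```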
## Usage
Use this environment for any workflow step that involves LLM interaction: loading system prompts, executing the iterative propose-critique cycle, capturing raw DoT output, and validating format compliance. It is the mandatory prerequisite for running the Iterative_Reasoner_Prompt_Loading, LLM_Response_Capture, and Format_Compliance_Validator implementations.
## System Requirements
| Category | Requirement | Notes |
|---|---|---|
| OS | Any (Linux, macOS, Windows) | No OS-specific constraints |
| Network | Internet access | Required for API calls to cloud LLM providers |
| Hardware | Standard CPU | No GPU required (inference runs on remote API) |
## Dependencies

### System Packages
- No OS-level system packages required
### Python Packages
- `openai` >= 1.0.0 (for OpenAI ChatCompletion API)
- `anthropic` >= 0.18.0 (for Anthropic Messages API)
- `requests` >= 2.28.0 (fallback for generic HTTP-based LLM endpoints)
Note: Only one LLM client library is required, depending on the chosen provider.
### Credentials
The following environment variables must be set depending on the LLM provider:
- `OPENAI_API_KEY`: OpenAI API key (if using GPT-4 or compatible models)
- `ANTHROPIC_API_KEY`: Anthropic API key (if using Claude models)
Warning: Never commit actual API keys to source control. Use `.env` files or secret management systems.
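A small fail-fast helper keeps missing credentials from surfacing later as an opaque `AuthenticationError` mid-run. This is a sketch (the `require_api_key` name is an assumption, not a library function):

```python
import os

def require_api_key(var_name: str) -> str:
    """Fetch an API key from the environment, failing fast with a clear error."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set; export it or load it from a .env file "
            "before making LLM API calls."
        )
    return key
```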
## Quick Install

```shell
# For OpenAI provider (quotes prevent the shell from treating >= as redirection)
pip install "openai>=1.0.0"

# For Anthropic provider
pip install "anthropic>=0.18.0"

# For generic HTTP provider
pip install "requests>=2.28.0"
```
## Code Evidence

LLM API usage from the system prompt loading pattern described in `README.md:L59-61`:

> You can guide any capable LLM to perform DoT reasoning with the following
> prompt structure. The model learns to alternate between roles to construct
> the reasoning graph.

Interactive sandbox reference from `README.md:L41-43`:

> * **Interactive Sandbox (ChatGPT):** Try the DoT process yourself in this interactive GPT.
> * https://chatgpt.com/g/g-oPWt6oqF0-iterative-reasoner
Process flow from `prompts/iterative-reasoner.md:L37-41` defining the LLM interaction loop:

> 1. **Iteration Begins**: The `<proposer>` presents one or more reasoning steps.
> 2. **Critical Evaluation**: The `<critic>` analyzes these steps.
> 3. **Assessment and Synthesis**: The `<summarizer>` reviews the validated propositions.
> 4. **Repeat**: This cycle continues until the `<summarizer>` confirms reasoning is complete.
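The four-step cycle above can be sketched as a driver loop. `call_llm` is a stub standing in for whichever provider client is in use, and the "reasoning is complete" string check is an assumed convention for illustration, not a documented protocol:

```python
import re

def call_llm(conversation: list[dict]) -> str:
    """Placeholder for a real provider call (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError

def reasoning_complete(response: str) -> bool:
    """Check whether the <summarizer> block declares the reasoning finished."""
    summary = re.search(r"<summarizer>(.*?)</summarizer>", response, re.DOTALL)
    return bool(summary) and "reasoning is complete" in summary.group(1).lower()

def run_dot_loop(messages: list[dict], llm=call_llm, max_iterations: int = 10) -> list[str]:
    """Drive the propose-critique-summarize cycle until completion or the cap."""
    transcript = []
    for _ in range(max_iterations):
        response = llm(messages)
        transcript.append(response)
        if reasoning_complete(response):
            break
        # Feed the model's output back and ask it to continue the cycle.
        messages = messages + [
            {"role": "assistant", "content": response},
            {"role": "user", "content": "Continue."},
        ]
    return transcript
```

The `max_iterations` cap is a safety bound so a model that never emits a completion signal cannot loop indefinitely.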
## Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `openai.AuthenticationError` | Invalid or missing API key | Set `OPENAI_API_KEY` environment variable with a valid key |
| `anthropic.AuthenticationError` | Invalid or missing API key | Set `ANTHROPIC_API_KEY` environment variable with a valid key |
| `RateLimitError` | Too many API requests | Implement exponential backoff or reduce request frequency |
| Model does not follow XML tags | Model not capable enough | Use a more capable model (GPT-4, Claude 3.5+); less capable models may not reliably alternate between `<proposer>`, `<critic>`, `<summarizer>` roles |
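For the `RateLimitError` row above, a minimal exponential-backoff wrapper might look like the following sketch. Which exception type to pass as `retryable` depends on the chosen SDK (e.g. the rate-limit error class it exports):

```python
import time

def with_backoff(fn, retries: int = 5, base_delay: float = 1.0,
                 retryable=(Exception,)):
    """Call fn(), retrying with exponential backoff on retryable errors;
    re-raise after the final attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except retryable:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```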
## Compatibility Notes
- OpenAI: Use `gpt-4` or `gpt-4o` models. The `gpt-3.5-turbo` model may not reliably follow the structured XML tag alternation required by DoT.
- Anthropic: Use Claude 3.5 Sonnet or Claude 3 Opus for best results with structured reasoning.
- Local Models: Self-hosted models (e.g., via vLLM or Ollama) may work if they support system prompts and can follow structured XML output formats. Results vary by model capability.
- Streaming: If using streaming responses, ensure the full response is captured before parsing typed records (`@node`, `@edge`, `@status`).
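Once the streamed response is fully captured, the typed records can be extracted line by line. A sketch, assuming each record occupies one line of the form `@tag payload` (the exact grammar is an assumption for illustration):

```python
def parse_typed_records(text: str) -> dict[str, list[str]]:
    """Group @node / @edge / @status lines from a fully captured DoT response."""
    records = {"node": [], "edge": [], "status": []}
    for line in text.splitlines():
        line = line.strip()
        for tag in records:
            prefix = f"@{tag} "
            if line.startswith(prefix):
                records[tag].append(line[len(prefix):])
    return records
```

Parsing only after the stream ends avoids splitting a record across chunk boundaries.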