
Principle:Spcl Graph of thoughts LLM Prompt Generation

From Leeroopedia
Knowledge Sources
Domains Prompt_Engineering, LLM_Orchestration
Last Updated 2026-02-14 12:00 GMT

Overview

An abstract interface pattern that defines how to generate operation-specific prompts for language models in a graph-based reasoning framework.

Description

The Prompter abstraction decouples prompt generation from reasoning execution. Each operation type in the Graph of Operations (Generate, Aggregate, Improve, Validate, Score) requires a different style of prompt. The Prompter defines five abstract methods matching these operation types:

  • generate_prompt: Creates prompts that ask the LLM to solve or transform the current state
  • aggregation_prompt: Creates prompts that ask the LLM to merge multiple thought states
  • improve_prompt: Creates prompts that ask the LLM to refine a thought
  • validation_prompt: Creates prompts that ask the LLM to check if a thought is valid
  • score_prompt: Creates prompts that ask the LLM to evaluate thought quality

Each concrete Prompter implementation encodes domain-specific knowledge as prompt templates. The thought state dict is unpacked as kwargs, allowing implementations to specify required arguments explicitly.
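The interface described above can be sketched as a Python abstract base class. This is a minimal sketch, not the framework's actual source; the method names come from the list above, while the exact parameter names (`num_branches`, `state_dicts`) are assumptions about how the thought states are passed in.

```python
from abc import ABC, abstractmethod
from typing import Dict, List


class Prompter(ABC):
    """One abstract prompt-builder per operation type in the graph."""

    @abstractmethod
    def generate_prompt(self, num_branches: int, **kwargs) -> str:
        """Prompt asking the LLM to solve or transform the current state."""

    @abstractmethod
    def aggregation_prompt(self, state_dicts: List[Dict], **kwargs) -> str:
        """Prompt asking the LLM to merge multiple thought states."""

    @abstractmethod
    def improve_prompt(self, **kwargs) -> str:
        """Prompt asking the LLM to refine a thought."""

    @abstractmethod
    def validation_prompt(self, **kwargs) -> str:
        """Prompt asking the LLM to check whether a thought is valid."""

    @abstractmethod
    def score_prompt(self, state_dicts: List[Dict], **kwargs) -> str:
        """Prompt asking the LLM to evaluate thought quality."""
```

Because the thought state dict is unpacked as `**kwargs`, a concrete subclass can list the state keys it actually needs as explicit named parameters.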

Usage

Use this principle when implementing a new problem domain for the GoT framework. Create a concrete Prompter subclass that defines prompt templates for your specific task (sorting, keyword counting, document merging, etc.). The Prompter is the primary place where domain expertise is encoded.
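A concrete Prompter for the sorting domain might look like the following sketch. All names here (`SortingPrompter`, the template strings) are illustrative, not taken from the framework; in practice the class would subclass the framework's abstract Prompter, which is omitted here to keep the example self-contained.

```python
from typing import Dict, List


class SortingPrompter:
    """Hypothetical Prompter encoding sorting-specific prompt templates."""

    GENERATE = (
        "Sort the following list of numbers in ascending order.\n"
        "Output only the sorted list.\nInput: {current}"
    )
    AGGREGATE = "Merge the following sorted lists into one sorted list.\nLists: {states}"
    SCORE = "Count the adjacent pairs that are out of order in: {current}"

    def generate_prompt(self, num_branches: int, current: str, **kwargs) -> str:
        # The thought-state dict is unpacked as kwargs, so the required
        # state key ("current") appears explicitly in the signature.
        return self.GENERATE.format(current=current)

    def aggregation_prompt(self, state_dicts: List[Dict], **kwargs) -> str:
        states = "; ".join(d["current"] for d in state_dicts)
        return self.AGGREGATE.format(states=states)

    def score_prompt(self, state_dicts: List[Dict], **kwargs) -> str:
        return self.SCORE.format(current=state_dicts[0]["current"])
```

All domain expertise lives in the template strings; nothing about sorting leaks into the execution engine.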

Theoretical Basis

The Strategy Pattern applied to LLM prompt engineering. Rather than hard-coding prompts in the execution engine, the Prompter interface allows hot-swapping prompt strategies. This separation enables:

  • Reusability -- the same controller and graph of operations can be applied to many different problem domains simply by swapping the Prompter
  • Testability -- prompt generation can be unit-tested independently of LLM calls
  • Composability -- different prompt strategies (zero-shot, few-shot, chain-of-thought) can coexist as sibling implementations
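The hot-swapping described above can be shown with two sibling strategies behind one controller function. This is a toy sketch of the Strategy Pattern, assuming hypothetical class names; the real framework's controller is more elaborate.

```python
class ZeroShotPrompter:
    """Sibling strategy: no examples in the prompt."""

    def generate_prompt(self, num_branches, current, **kwargs):
        return f"Sort this list: {current}"


class FewShotPrompter:
    """Sibling strategy: same interface, but prepends a worked example."""

    EXAMPLE = "Input: [3, 1, 2] -> Output: [1, 2, 3]"

    def generate_prompt(self, num_branches, current, **kwargs):
        return f"{self.EXAMPLE}\nSort this list: {current}"


def build_prompt(prompter, state):
    # The controller depends only on the shared method name, so any
    # strategy object can be dropped in without changing this code.
    return prompter.generate_prompt(1, **state)


state = {"current": "[9, 4, 7]"}
zero = build_prompt(ZeroShotPrompter(), state)
few = build_prompt(FewShotPrompter(), state)
```

Swapping the strategy changes the prompt, not the controller, which is what makes prompt generation independently unit-testable.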

Pseudo-code:

# Abstract algorithm (NOT real implementation)
for each operation in graph:
    if operation.type == GENERATE:
        prompt = prompter.generate_prompt(num_branches, **thought.state)
    elif operation.type == AGGREGATE:
        prompt = prompter.aggregation_prompt(predecessor_states)
    elif operation.type == SCORE:
        prompt = prompter.score_prompt(states_to_score)
    # ... etc
    response = lm.query(prompt)
    new_state = parser.parse(response)
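The abstract loop above can be made runnable with stub components. Everything here (`StubLM`, `StubParser`, `EchoPrompter`, the tuple-based operation encoding) is illustrative scaffolding, not the framework's real classes.

```python
GENERATE, AGGREGATE, SCORE = "generate", "aggregate", "score"


class StubLM:
    def query(self, prompt: str) -> str:
        return f"response to: {prompt}"


class StubParser:
    def parse(self, response: str) -> dict:
        return {"current": response}


class EchoPrompter:
    def generate_prompt(self, num_branches, current, **kwargs):
        return f"generate from {current}"

    def aggregation_prompt(self, state_dicts, **kwargs):
        return "aggregate " + ", ".join(d["current"] for d in state_dicts)

    def score_prompt(self, state_dicts, **kwargs):
        return "score " + state_dicts[0]["current"]


def run(operations, prompter, lm, parser, state):
    # Dispatch each operation to the matching prompt-builder, query the
    # model, and parse the response into the next thought state.
    for op_type, payload in operations:
        if op_type == GENERATE:
            prompt = prompter.generate_prompt(1, **state)
        elif op_type == AGGREGATE:
            prompt = prompter.aggregation_prompt(payload)
        elif op_type == SCORE:
            prompt = prompter.score_prompt(payload)
        state = parser.parse(lm.query(prompt))
    return state
```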

Related Pages

Implemented By
