
Principle:SPCL Graph of Thoughts Language Model Abstraction

From Leeroopedia
Knowledge Sources
Domains LLM_Orchestration, Software_Architecture
Last Updated 2026-02-14 12:00 GMT

Overview

An abstract interface pattern that encapsulates all interaction with language models behind a uniform two-method interface: query and get_response_texts. This lets the Graph of Thoughts (GoT) framework operate independently of any specific LLM provider.

Description

The AbstractLanguageModel abstraction provides a provider-agnostic layer for interacting with language models. The core insight is that, regardless of the underlying LLM (OpenAI GPT-4, HuggingFace LLaMA, etc.), the framework only needs two operations:

  1. query: Send a prompt string to the model and receive a raw response object
  2. get_response_texts: Extract a list of plain text strings from that response object
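The two operations above can be sketched as an abstract base class. This is a minimal illustration of the interface shape described here, not necessarily the library's exact signatures:

```python
from abc import ABC, abstractmethod
from typing import Any, List

class AbstractLanguageModel(ABC):
    """Uniform two-method interface for any LLM provider."""

    @abstractmethod
    def query(self, query: str, num_responses: int = 1) -> Any:
        """Send a prompt string and return the provider's raw response object."""
        ...

    @abstractmethod
    def get_response_texts(self, query_response: Any) -> List[str]:
        """Extract a list of plain text strings from a raw response object."""
        ...
```

Because the return type of query is opaque to the framework, each provider can hand back whatever its SDK produces; only get_response_texts needs to understand that object's shape.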

By abstracting these two operations, the framework achieves complete decoupling between reasoning logic (the controller and graph of operations) and the specifics of any particular LLM API.

Beyond the abstract interface, the base class also provides concrete infrastructure:

  • Configuration loading -- reads model parameters (temperature, max tokens, API keys, cost rates) from a JSON config file
  • Cost tracking -- maintains running totals of prompt_tokens, completion_tokens, and cost across all queries
  • Response caching -- optionally caches responses keyed by prompt string, avoiding redundant API calls during experimentation

Usage

Use this principle when adding support for a new LLM provider to the GoT framework. Create a concrete subclass of AbstractLanguageModel that implements query and get_response_texts for your provider's API. The base class handles config loading, caching, and cost tracking automatically.
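A concrete subclass might look like the toy adapter below. The provider backend is faked with an echo function, and the minimal base class is inlined as a stand-in for the real one; a real subclass would call the vendor SDK inside query:

```python
from abc import ABC, abstractmethod
from typing import Any, List

class AbstractLanguageModel(ABC):  # minimal stand-in for the real base class
    @abstractmethod
    def query(self, query: str, num_responses: int = 1) -> Any: ...
    @abstractmethod
    def get_response_texts(self, query_response: Any) -> List[str]: ...

class EchoLM(AbstractLanguageModel):
    """Toy adapter: 'calls' a fake backend that echoes the prompt."""

    def query(self, query: str, num_responses: int = 1) -> Any:
        # A real subclass would invoke the provider's API here and
        # update prompt_tokens / completion_tokens / cost.
        return {"choices": [{"text": f"echo: {query}"} for _ in range(num_responses)]}

    def get_response_texts(self, query_response: Any) -> List[str]:
        # Knows the shape of this provider's raw response object.
        return [c["text"] for c in query_response["choices"]]
```

The controller never sees the {"choices": [...]} shape; it only ever receives the list of strings from get_response_texts.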

Theoretical Basis

The Adapter Pattern (also known as the Wrapper Pattern) applied to LLM APIs. Each concrete language model class adapts a vendor-specific API (OpenAI, HuggingFace Transformers, etc.) into the uniform interface expected by the GoT controller.

This separation enables:

  • Provider independence -- swap between GPT-4 and LLaMA without changing any controller, prompter, or parser code
  • Cost management -- centralized token and cost tracking across all providers
  • Reproducibility -- response caching allows deterministic re-runs of experiments
  • Testability -- the abstract interface can be mocked for unit testing without making real API calls
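The testability point can be made concrete with a hand-rolled test double (names hypothetical). Anything that implements the two methods is a valid stand-in, so no real API calls are needed:

```python
from typing import Any, List

class MockLM:
    """Drop-in test double implementing the two-method interface."""

    def __init__(self, canned: List[str]):
        self.canned = canned          # responses to hand back
        self.queries: List[str] = []  # prompts recorded for assertions

    def query(self, query: str, num_responses: int = 1) -> Any:
        self.queries.append(query)
        return self.canned[:num_responses]

    def get_response_texts(self, query_response: Any) -> List[str]:
        return list(query_response)
```

A unit test passes a MockLM to the controller, runs the graph, and then asserts on mock.queries to check that the expected prompts were issued.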

Pseudo-code:

# Abstract algorithm (NOT real implementation)
# The controller depends only on the abstract interface;
# lm is any concrete subclass of AbstractLanguageModel:
lm = ConcreteLanguageModel(config_path="config.json", model_name="chatgpt")

for each operation in graph:
    prompt = prompter.create_prompt(operation, thought_states)
    raw_response = lm.query(prompt, num_responses=operation.num_branches)
    texts = lm.get_response_texts(raw_response)
    new_states = parser.parse(operation.type, states, texts)

# After execution, inspect costs:
print(f"Total cost: ${lm.cost:.4f}")
print(f"Prompt tokens: {lm.prompt_tokens}, Completion tokens: {lm.completion_tokens}")

Related Pages

Implemented By

Related Principles
