# Implementation: SPCL Graph of Thoughts AbstractLanguageModel
| Knowledge Sources | |
|---|---|
| Source File | `graph_of_thoughts/language_models/abstract_language_model.py`, lines 16-92 |
| Import | `from graph_of_thoughts.language_models import AbstractLanguageModel` |
| Domains | LLM_Orchestration, Software_Architecture |
| Last Updated | 2026-02-14 12:00 GMT |
## Overview
The `AbstractLanguageModel` class is the abstract base class (ABC) that defines the interface for all language model integrations in the Graph of Thoughts framework. It provides concrete infrastructure for configuration loading, response caching, and cost tracking, while delegating the actual LLM interaction to two abstract methods: `query` and `get_response_texts`.
## Interface

### Class Definition and Constructor
```python
from abc import ABC, abstractmethod
from typing import List, Dict, Union, Any
import json
import os
import logging


class AbstractLanguageModel(ABC):
    """
    Abstract base class that defines the interface for all language models.
    """

    def __init__(
        self, config_path: str = "", model_name: str = "", cache: bool = False
    ) -> None:
        """
        Initialize the AbstractLanguageModel instance with configuration,
        model details, and caching options.

        :param config_path: Path to the config file. Defaults to "".
        :type config_path: str
        :param model_name: Name of the language model. Defaults to "".
        :type model_name: str
        :param cache: Flag to determine whether to cache responses. Defaults to False.
        :type cache: bool
        """
        self.logger = logging.getLogger(self.__class__.__name__)
        self.config: Dict = None
        self.model_name: str = model_name
        self.cache = cache
        if self.cache:
            self.response_cache: Dict[str, List[Any]] = {}
        self.load_config(config_path)
        self.prompt_tokens: int = 0
        self.completion_tokens: int = 0
        self.cost: float = 0.0
```
## Concrete Methods

### load_config
```python
def load_config(self, path: str) -> None:
    """
    Load configuration from a specified path.

    :param path: Path to the config file. If an empty path is provided,
        the default is config.json in the current directory.
    :type path: str
    """
    if path == "":
        current_dir = os.path.dirname(os.path.abspath(__file__))
        path = os.path.join(current_dir, "config.json")
    with open(path, "r") as f:
        self.config = json.load(f)
```
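`load_config` parses the whole JSON file into `self.config`; each subclass then selects its own section by model name. The sketch below shows a plausible config layout, with section keys inferred from the fields the `ChatGPT` subclass reads later on this page (`model_id`, `prompt_token_cost`, etc.); the concrete values are illustrative assumptions, not the project's shipped defaults.

```python
import json
import tempfile

# Hypothetical config.json contents; keys mirror what the ChatGPT
# subclass example on this page reads, values are made up.
config = {
    "chatgpt": {
        "model_id": "gpt-4",
        "prompt_token_cost": 0.03,    # assumed: dollars per 1K prompt tokens
        "response_token_cost": 0.06,  # assumed: dollars per 1K completion tokens
        "temperature": 1.0,
        "max_tokens": 4096,
    }
}

# Write it to a throwaway file, then load it the way load_config does.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(config, f)
    config_path = f.name

with open(config_path, "r") as f:
    loaded = json.load(f)

# A subclass would then pick its section via self.config[model_name].
print(loaded["chatgpt"]["model_id"])  # gpt-4
```
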
### clear_cache

```python
def clear_cache(self) -> None:
    """
    Clear the response cache.
    """
    self.response_cache.clear()
## Abstract Methods

### query

```python
@abstractmethod
def query(self, query: str, num_responses: int = 1) -> Any:
    """
    Abstract method to query the language model.

    :param query: The query to be posed to the language model.
    :type query: str
    :param num_responses: The number of desired responses.
    :type num_responses: int
    :return: The language model's response(s).
    :rtype: Any
    """
    pass
```
### get_response_texts

```python
@abstractmethod
def get_response_texts(self, query_responses: Union[List[Any], Any]) -> List[str]:
    """
    Abstract method to extract response texts from the language model's response(s).

    :param query_responses: The responses returned from the language model.
    :type query_responses: Union[List[Any], Any]
    :return: List of textual responses.
    :rtype: List[str]
    """
    pass
```
## Input / Output

### Constructor

| Parameter | Type | Default | Description |
|---|---|---|---|
| `config_path` | `str` | `""` | Path to a JSON config file; if empty, defaults to `config.json` in the `language_models` package directory |
| `model_name` | `str` | `""` | Name of the model, used to select the correct section from the config |
| `cache` | `bool` | `False` | Whether to enable response caching |
### Instance Attributes After Initialization

| Attribute | Type | Description |
|---|---|---|
| `config` | `Dict` | Parsed JSON configuration dictionary |
| `model_name` | `str` | Name of the language model |
| `cache` | `bool` | Whether caching is enabled |
| `response_cache` | `Dict[str, List[Any]]` | Cache mapping prompt strings to response lists (only created if `cache=True`) |
| `prompt_tokens` | `int` | Running total of prompt tokens consumed |
| `completion_tokens` | `int` | Running total of completion tokens consumed |
| `cost` | `float` | Running total cost in dollars |
| `logger` | `logging.Logger` | Logger instance named after the concrete subclass |
### Abstract Method I/O

| Method | Input | Output |
|---|---|---|
| `query` | `query: str` -- prompt text; `num_responses: int` -- number of desired responses (default 1) | `Any` -- raw response object from the LLM provider |
| `get_response_texts` | `query_responses: Union[List[Any], Any]` -- raw response(s) from `query` | `List[str]` -- extracted text strings |
## Design Notes

- The constructor calls `self.load_config(config_path)` during initialization. Concrete subclasses typically read their specific section from `self.config` (e.g., `self.config[model_name]`) in their own `__init__`.
- Cost tracking attributes (`prompt_tokens`, `completion_tokens`, `cost`) are initialized to zero. Concrete subclasses are responsible for incrementing these values in their `query` implementation.
- The `response_cache` attribute is only created when `cache=True`. Concrete subclasses should check `self.cache` before attempting to read from or write to the cache.
- The `query` method returns `Any` because different providers return different response types (e.g., OpenAI returns `ChatCompletion` objects). The `get_response_texts` method normalizes these into `List[str]`.
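The cost-tracking convention can be shown with a small worked example. The per-1K-token formula matches the update in the `ChatGPT` subclass on this page; the token prices here are illustrative assumptions, not the framework's configured values.

```python
# Assumed prices, dollars per 1K tokens (illustrative only).
prompt_token_cost = 0.03
response_token_cost = 0.06

# Token counts a provider response might report for one query.
prompt_tokens = 500
completion_tokens = 200

# Per-query cost increment, as accumulated into self.cost:
cost = (prompt_tokens * prompt_token_cost
        + completion_tokens * response_token_cost) / 1000.0
print(cost)  # 0.027
```
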
## Example: Subclassing AbstractLanguageModel

```python
from graph_of_thoughts.language_models import AbstractLanguageModel
from typing import List, Union, Any
from openai import OpenAI


class ChatGPT(AbstractLanguageModel):
    """Concrete LM implementation for OpenAI ChatGPT models."""

    def __init__(
        self, config_path: str = "", model_name: str = "chatgpt", cache: bool = False
    ) -> None:
        super().__init__(config_path, model_name, cache)
        self.config = self.config[model_name]
        self.model_id = self.config["model_id"]
        self.prompt_token_cost = self.config["prompt_token_cost"]
        self.response_token_cost = self.config["response_token_cost"]
        self.temperature = self.config["temperature"]
        self.max_tokens = self.config["max_tokens"]
        self.client = OpenAI()

    def query(self, query: str, num_responses: int = 1) -> Any:
        # Send prompt to OpenAI API, update cost tracking
        response = self.client.chat.completions.create(
            model=self.model_id,
            messages=[{"role": "user", "content": query}],
            temperature=self.temperature,
            max_tokens=self.max_tokens,
            n=num_responses,
        )
        # Update token counts and cost
        self.prompt_tokens += response.usage.prompt_tokens
        self.completion_tokens += response.usage.completion_tokens
        self.cost += (
            response.usage.prompt_tokens * self.prompt_token_cost
            + response.usage.completion_tokens * self.response_token_cost
        ) / 1000.0
        return response

    def get_response_texts(self, query_responses: Union[List[Any], Any]) -> List[str]:
        return [choice.message.content for choice in query_responses.choices]
```
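The ChatGPT example above needs an API key and network access. To exercise the contract offline, the self-contained sketch below reproduces a trimmed copy of the base class (docstrings and the empty-path default omitted) and plugs in a hypothetical `EchoModel` stub that fakes provider responses; both the stub and the temp-file config are illustrative, not part of the framework.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Union
import json
import logging
import tempfile


class AbstractLanguageModel(ABC):
    """Trimmed reproduction of the base class shown above."""

    def __init__(
        self, config_path: str = "", model_name: str = "", cache: bool = False
    ) -> None:
        self.logger = logging.getLogger(self.__class__.__name__)
        self.config: Dict = None
        self.model_name: str = model_name
        self.cache = cache
        if self.cache:
            self.response_cache: Dict[str, List[Any]] = {}
        self.load_config(config_path)
        self.prompt_tokens: int = 0
        self.completion_tokens: int = 0
        self.cost: float = 0.0

    def load_config(self, path: str) -> None:
        with open(path, "r") as f:
            self.config = json.load(f)

    @abstractmethod
    def query(self, query: str, num_responses: int = 1) -> Any: ...

    @abstractmethod
    def get_response_texts(
        self, query_responses: Union[List[Any], Any]
    ) -> List[str]: ...


class EchoModel(AbstractLanguageModel):
    """Hypothetical stub: echoes prompts back, no provider calls."""

    def query(self, query: str, num_responses: int = 1) -> Any:
        return [f"echo: {query}"] * num_responses

    def get_response_texts(self, query_responses: Union[List[Any], Any]) -> List[str]:
        return list(query_responses)


# The constructor calls load_config, so give it a throwaway JSON file.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"echo": {}}, f)
    cfg_path = f.name

lm = EchoModel(config_path=cfg_path, model_name="echo")
texts = lm.get_response_texts(lm.query("hello", num_responses=2))
print(texts)  # ['echo: hello', 'echo: hello']
```

Because both `query` and `get_response_texts` are implemented, `EchoModel` instantiates cleanly; attempting `AbstractLanguageModel(...)` directly would raise `TypeError`, which is how the ABC enforces the interface.
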
## Related Pages
### Concrete Implementations

- `ChatGPT` in `graph_of_thoughts/language_models/chatgpt.py`
- `Llama2HF` in `graph_of_thoughts/language_models/llamachat_hf.py`