# Implementation:Spcl Graph of thoughts ChatGPT
| Knowledge Sources | |
|---|---|
| Domains | LLM_Integration, API_Design |
| Principles | Principle:Spcl_Graph_of_thoughts_OpenAI_Chat_Integration |
| Source File | graph_of_thoughts/language_models/chatgpt.py, Lines 20-157 |
| Last Updated | 2026-02-14 |
## Overview
The ChatGPT class provides a concrete implementation of AbstractLanguageModel that connects to OpenAI's Chat Completion API. It handles configuration loading, API client initialization, exponential backoff on errors, token cost tracking, response caching, and multi-response querying.
## Import

```python
from graph_of_thoughts.language_models import ChatGPT
```
## Class Signature

```python
class ChatGPT(AbstractLanguageModel):
    def __init__(
        self, config_path: str = "", model_name: str = "chatgpt", cache: bool = False
    ) -> None: ...
    def query(
        self, query: str, num_responses: int = 1
    ) -> Union[List[ChatCompletion], ChatCompletion]: ...
    def chat(
        self, messages: List[Dict], num_responses: int = 1
    ) -> ChatCompletion: ...
    def get_response_texts(
        self, query_response: Union[List[ChatCompletion], ChatCompletion]
    ) -> List[str]: ...
```
## External Dependencies

- `openai` -- OpenAI Python client library (provides `OpenAI`, `OpenAIError`, `ChatCompletion`)
- `backoff` -- retry library for exponential backoff on exceptions
## Configuration Parameters

| Parameter | Type | Description |
|---|---|---|
| `model_id` | `str` | OpenAI model identifier (e.g., `gpt-4`, `gpt-3.5-turbo`) |
| `prompt_token_cost` | `float` | Cost per 1000 prompt tokens |
| `response_token_cost` | `float` | Cost per 1000 completion tokens |
| `temperature` | `float` | Randomness of model output (0.0 = deterministic, higher = more random) |
| `max_tokens` | `int` | Maximum tokens to generate per completion |
| `stop` | `Union[str, List[str]]` | Stop sequence(s) that terminate generation |
| `organization` | `str` | OpenAI organization identifier |
| `api_key` | `str` | API key (overridden by `OPENAI_API_KEY` env var if set) |
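For reference, a `config.json` matching the parameters above might look like the following sketch. The top-level key corresponds to the `model_name` passed to the constructor; the model, rates, and placeholder values shown are illustrative, not values taken from the repository:

```json
{
    "chatgpt": {
        "model_id": "gpt-3.5-turbo",
        "prompt_token_cost": 0.0015,
        "response_token_cost": 0.002,
        "temperature": 1.0,
        "max_tokens": 1536,
        "stop": null,
        "organization": "<your-organization-id>",
        "api_key": "<your-api-key>"
    }
}
```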
## I/O Behavior
Input: A JSON configuration file path containing model settings and API credentials.
Output: An initialized ChatGPT instance with an OpenAI client ready to accept queries, tracking prompt/completion tokens and cumulative cost.
## query()

- Input: A string query and an optional `num_responses` count.
- Output: A single `ChatCompletion` (when `num_responses=1`) or a list of `ChatCompletion` objects.
- Caching: If caching is enabled, returns the cached response for previously seen queries.
- Multi-response handling: When requesting multiple responses, attempts batch requests via the OpenAI `n` parameter. On failure, halves the batch size and retries until all responses are collected or attempts are exhausted.
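The batch-halving strategy described above can be sketched as follows. This is a minimal stdlib-only illustration of the technique, not the repository's exact code; `query_with_halving`, `chat_fn`, and `max_attempts` are assumed names for the purpose of the example:

```python
import random
import time

def query_with_halving(chat_fn, messages, num_responses, max_attempts=10):
    """Collect num_responses completions, halving the batch size on failure.

    chat_fn(messages, n) is assumed to return one response carrying n choices.
    All identifiers here are illustrative, not the repository's exact names.
    """
    responses = []
    remaining = num_responses
    batch = num_responses              # start by asking for everything at once
    for _ in range(max_attempts):
        if remaining <= 0:
            break
        n = min(batch, remaining)
        try:
            responses.append(chat_fn(messages, n))
            remaining -= n
        except Exception:
            batch = max(1, batch // 2)        # halve the batch on failure
            time.sleep(random.uniform(1, 3))  # jitter, per the notes below
    if remaining > 0:
        raise RuntimeError("could not collect all requested responses")
    return responses
```

The halving trades one large request for several smaller ones, which helps when the provider rejects large `n` values or times out on big batches.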
## chat()

- Input: A list of message dictionaries (`[{"role": "user", "content": "..."}]`) and `num_responses`.
- Output: A single `ChatCompletion` object.
- Error handling: Decorated with `@backoff.on_exception(backoff.expo, OpenAIError, max_time=10, max_tries=6)`.
- Side effects: Updates `self.prompt_tokens`, `self.completion_tokens`, and `self.cost` after each call.
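The effect of that decorator can be approximated with a stdlib-only sketch. The real class delegates retrying to the `backoff` library; `expo_retry` below is an assumed name that mirrors the documented limits (exponential delays, at most 6 tries, at most ~10 seconds total):

```python
import random
import time

def expo_retry(fn, max_tries=6, max_time=10.0, base=2.0):
    """Retry fn with exponential backoff, approximating
    @backoff.on_exception(backoff.expo, ..., max_time=10, max_tries=6).
    Illustrative sketch only; not the backoff library's implementation."""
    start = time.monotonic()
    for attempt in range(max_tries):
        try:
            return fn()
        except Exception:
            elapsed = time.monotonic() - start
            if attempt == max_tries - 1 or elapsed >= max_time:
                raise  # budget exhausted: surface the last error
            # wait up to base**attempt seconds (full jitter),
            # capped by the remaining time budget
            delay = min(base ** attempt, max_time - elapsed)
            time.sleep(random.uniform(0, max(delay, 0.0)))
```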
## get_response_texts()

- Input: A `ChatCompletion` or list of `ChatCompletion` objects.
- Output: A flat list of response text strings, extracted from `choice.message.content` across all responses and choices.
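The flattening behavior amounts to a nested iteration over responses and their choices; a minimal sketch of the documented behavior (not the repository's exact code, tested here with stand-in objects rather than real `ChatCompletion` instances):

```python
from types import SimpleNamespace

def get_response_texts(query_response):
    """Flatten choice.message.content across one response or a list of them.
    Illustrative sketch of the documented behavior."""
    if not isinstance(query_response, list):
        query_response = [query_response]  # normalize the single-response case
    return [
        choice.message.content
        for response in query_response
        for choice in response.choices
    ]
```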
## Key Implementation Details

- The API key is resolved with a fallback chain: `os.getenv("OPENAI_API_KEY")` takes priority, falling back to the config file value. If neither is set, a `ValueError` is raised.
- The `organization` field logs a warning if empty but does not raise an error.
- Cost is computed as `(prompt_tokens / 1000) * prompt_token_cost + (completion_tokens / 1000) * response_token_cost`.
- The multi-response retry loop in `query()` sleeps for a random 1-3 seconds between attempts to avoid thundering herd effects.
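The key-resolution chain and the cost formula above can be sketched as two small helpers. Function names and the default per-1000-token rates are illustrative assumptions, not identifiers or prices from the repository:

```python
import os

def resolve_api_key(config_value: str) -> str:
    """Environment variable wins; fall back to the config file value;
    raise if neither is set. Illustrative sketch of the documented chain."""
    key = os.getenv("OPENAI_API_KEY") or config_value
    if not key:
        raise ValueError("no OpenAI API key provided")
    return key

def compute_cost(prompt_tokens: int, completion_tokens: int,
                 prompt_token_cost: float = 0.03,
                 response_token_cost: float = 0.06) -> float:
    """Cost per the documented formula; the default rates are
    illustrative placeholders, not values from the repository."""
    return (prompt_tokens / 1000.0) * prompt_token_cost \
        + (completion_tokens / 1000.0) * response_token_cost
```

For example, 1000 prompt tokens and 500 completion tokens at these placeholder rates cost 0.03 + 0.03 = 0.06.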
## Usage Example

```python
from graph_of_thoughts.language_models import ChatGPT

# Initialize with default config
lm = ChatGPT(config_path="config.json", model_name="chatgpt", cache=True)

# Single query
response = lm.query("Sort this list: [3, 1, 2]")
texts = lm.get_response_texts(response)
print(texts)  # e.g., ["[1, 2, 3]"]

# Multi-response query
responses = lm.query("Generate 3 different sortings:", num_responses=3)
all_texts = lm.get_response_texts(responses)

# Check cost
print(f"Total cost: ${lm.cost:.4f}")
```
## Related Pages
- Principle:Spcl_Graph_of_thoughts_OpenAI_Chat_Integration
- Environment:Spcl_Graph_of_thoughts_Python_3_8_Runtime
- Environment:Spcl_Graph_of_thoughts_OpenAI_API_Access
- Heuristic:Spcl_Graph_of_thoughts_Backoff_Retry_On_API_Errors
## GitHub URL

graph_of_thoughts/language_models/chatgpt.py (Lines 20-157)