Environment:Princeton_nlp_Tree_of_thought_llm_Python_OpenAI
| Knowledge Sources | Details |
|---|---|
| Domains | Infrastructure, NLP, LLM_Reasoning |
| Last Updated | 2026-02-14 04:00 GMT |
Overview
Python 3.7+ environment with openai==0.27.7, backoff, numpy, sympy, and pandas for running Tree of Thoughts experiments against the OpenAI ChatCompletion API.
Description
This environment provides the full runtime context for the Tree of Thoughts framework. It is a CPU-only Python environment (no GPU required) that relies on the OpenAI API for all LLM inference. The stack centers on openai==0.27.7 (the legacy v0 SDK using openai.ChatCompletion.create), backoff for exponential retry, and scientific libraries (numpy, sympy, pandas) for task data loading and mathematical validation. All experiments communicate with OpenAI's hosted models (GPT-4, GPT-3.5-turbo, GPT-4o) via API key.
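Since all inference flows through the legacy v0 `openai.ChatCompletion.create` call, the shape of its request payload is worth spelling out. The sketch below shows that payload as a plain dict; the model name, prompt text, and parameter values are illustrative placeholders, not taken from the repository.

```python
# Sketch of the request payload shape expected by the legacy v0
# openai.ChatCompletion.create API. Field names follow the v0 SDK;
# the model and prompt text are illustrative placeholders.
request = {
    "model": "gpt-4",                       # any ChatCompletion-capable model
    "messages": [
        {"role": "user", "content": "Use 4 8 8 4 to make 24."}
    ],
    "temperature": 0.7,                     # sampling temperature
    "max_tokens": 1000,                     # response length cap
    "n": 1,                                 # number of completions to sample
}

# With the SDK installed and OPENAI_API_KEY set, this would be passed as:
#   openai.ChatCompletion.create(**request)
```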
Usage
Use this environment for any workflow in the Tree of Thoughts framework: running ToT+BFS experiments, baseline IO/CoT sampling, and adding new tasks. Every Implementation page in this repository requires this environment because all code paths ultimately call the gpt() function in src/tot/models.py, which depends on the OpenAI SDK and API key.
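The `gpt()` helper typically samples many completions per prompt (`n_generate_sample`), which is usually done in chunks rather than one oversized request. The sketch below illustrates that batching pattern; the chunk size of 20 and the stub `fake_completions_with_backoff` are assumptions for illustration and are not guaranteed to match `src/tot/models.py` exactly.

```python
# Illustrative sketch of batched ChatCompletion sampling: request n
# completions in chunks so no single API call asks for too many.
# The chunk size (20) and the stub below are illustrative assumptions.

def fake_completions_with_backoff(messages, n, **kwargs):
    """Stand-in for the real API call; returns n dummy choices."""
    return {"choices": [{"message": {"content": f"sample {i}"}} for i in range(n)]}

def sample_completions(messages, n, batch_size=20, call=fake_completions_with_backoff):
    outputs = []
    while n > 0:
        cnt = min(n, batch_size)   # never request more than batch_size at once
        res = call(messages=messages, n=cnt)
        outputs.extend(choice["message"]["content"] for choice in res["choices"])
        n -= cnt
    return outputs

msgs = [{"role": "user", "content": "propose next steps"}]
print(len(sample_completions(msgs, 45)))  # 45 samples gathered in 3 batches
```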
System Requirements
| Category | Requirement | Notes |
|---|---|---|
| OS | Any (Linux, macOS, Windows) | No platform-specific dependencies |
| Hardware | CPU only | All inference is remote via OpenAI API; no GPU needed |
| Network | Internet access required | Must reach api.openai.com (or custom OPENAI_API_BASE) |
| Disk | ~50MB | For package installation and dataset files |
Dependencies
System Packages
- Python >= 3.7 (3.7 through 3.11 officially supported)
- `pip` (for package installation)
- `git` (for source installation)
Python Packages
- `openai` == 0.27.7
- `backoff` == 2.2.1
- `numpy` == 1.24.3
- `sympy` == 1.12
- `pandas` == 2.0.3
- `tqdm` == 4.65.0
- `aiohttp` == 3.8.4
- `requests` == 2.31.0
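Among these, `backoff` is what keeps transient rate-limit errors from killing a run. The snippet below reimplements the core idea (exponentially growing delays between retries) in plain stdlib Python to show roughly what `@backoff.on_exception(backoff.expo, ...)` does; it is an illustration, not the library's actual implementation.

```python
import time

def retry_with_expo(fn, exc=Exception, max_tries=5, base=1.0, sleep=time.sleep):
    """Retry fn with exponentially growing delays (1s, 2s, 4s, ...),
    roughly what backoff.on_exception(backoff.expo, exc) provides."""
    for attempt in range(max_tries):
        try:
            return fn()
        except exc:
            if attempt == max_tries - 1:
                raise                     # give up after max_tries attempts
            sleep(base * (2 ** attempt))  # wait longer after each failure

# Demo: a flaky function that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

delays = []  # capture sleeps instead of actually waiting
result = retry_with_expo(flaky, exc=RuntimeError, sleep=delays.append)
print(result, delays)  # ok [1.0, 2.0]
```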
Credentials
The following environment variables must be set:
- `OPENAI_API_KEY`: OpenAI API key with access to ChatCompletion models (GPT-4, GPT-3.5-turbo, GPT-4o). Required.
- `OPENAI_API_BASE`: (Optional) Custom API base URL for proxy or self-hosted endpoints.
Quick Install
```shell
# Option 1: Install from PyPI
pip install tree-of-thoughts-llm

# Option 2: Install from source
git clone https://github.com/princeton-nlp/tree-of-thought-llm
cd tree-of-thought-llm
pip install -r requirements.txt
pip install -e .

# Set API key
export OPENAI_API_KEY="your-key-here"
```
Code Evidence
API key loading from `src/tot/models.py:7-16`:
```python
api_key = os.getenv("OPENAI_API_KEY", "")
if api_key != "":
    openai.api_key = api_key
else:
    print("Warning: OPENAI_API_KEY is not set")

api_base = os.getenv("OPENAI_API_BASE", "")
if api_base != "":
    print("Warning: OPENAI_API_BASE is set to {}".format(api_base))
    openai.api_base = api_base
```
OpenAI SDK usage from `src/tot/models.py:18-20`:
```python
@backoff.on_exception(backoff.expo, openai.error.OpenAIError)
def completions_with_backoff(**kwargs):
    return openai.ChatCompletion.create(**kwargs)
```
Python version constraint from `pyproject.toml:10`:
```toml
requires-python = ">= 3.7"
```
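To fail fast when run on an unsupported interpreter, a small guard mirroring this `requires-python` constraint can be placed at script entry points. This guard is a suggestion, not code from the repository:

```python
import sys

# Runtime mirror of pyproject.toml's `requires-python = ">= 3.7"`.
if sys.version_info < (3, 7):
    raise RuntimeError(
        "tree-of-thought-llm requires Python >= 3.7, found %s.%s"
        % sys.version_info[:2]
    )
print("Python version OK:", sys.version_info[:2])
```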
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `Warning: OPENAI_API_KEY is not set` | OPENAI_API_KEY environment variable not configured | `export OPENAI_API_KEY="sk-..."` before running |
| `openai.error.AuthenticationError` | Invalid or expired API key | Verify API key at https://platform.openai.com/api-keys |
| `openai.error.RateLimitError` | Too many concurrent requests | The backoff decorator handles this automatically; reduce `n_generate_sample` if persistent |
| `ModuleNotFoundError: No module named 'openai'` | openai package not installed | `pip install openai==0.27.7` |
| `AttributeError: module 'openai' has no attribute 'ChatCompletion'` | openai>=1.0 installed (breaking API change) | `pip install openai==0.27.7` (this repo requires the v0 SDK) |
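Because the `AttributeError` above stems from installing the wrong SDK major version, a check using only the standard library can diagnose it before any API call is attempted. The helper name below is illustrative, not part of the repository:

```python
from importlib import metadata

def openai_sdk_status():
    """Classify the installed openai SDK: 'missing', 'v0' (compatible
    with this repo), or 'v1+' (incompatible: ChatCompletion removed)."""
    try:
        version = metadata.version("openai")
    except metadata.PackageNotFoundError:
        return "missing"
    major = int(version.split(".")[0])
    return "v0" if major == 0 else "v1+"

status = openai_sdk_status()
print("openai SDK:", status)
if status != "v0":
    print("Install the legacy SDK: pip install openai==0.27.7")
```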
Compatibility Notes
- openai SDK version: This repository uses openai==0.27.7 (the v0 legacy SDK). The v1.0+ SDK introduced breaking changes (`openai.ChatCompletion.create` was removed). Do not upgrade to openai>=1.0 without refactoring `src/tot/models.py`.
- Python 3.12+: Not listed in official classifiers. May work but is untested.
- Custom API base: Setting `OPENAI_API_BASE` allows using compatible endpoints (Azure OpenAI, local proxies), but response format must match the v0 SDK expectations.
Related Pages
- Implementation:Princeton_nlp_Tree_of_thought_llm_Solve_BFS
- Implementation:Princeton_nlp_Tree_of_thought_llm_Get_Task
- Implementation:Princeton_nlp_Tree_of_thought_llm_Gpt_Completion
- Implementation:Princeton_nlp_Tree_of_thought_llm_Parse_Args
- Implementation:Princeton_nlp_Tree_of_thought_llm_Get_Proposals
- Implementation:Princeton_nlp_Tree_of_thought_llm_Get_Values
- Implementation:Princeton_nlp_Tree_of_thought_llm_Test_Output
- Implementation:Princeton_nlp_Tree_of_thought_llm_Gpt_Usage
- Implementation:Princeton_nlp_Tree_of_thought_llm_Naive_Solve
- Implementation:Princeton_nlp_Tree_of_thought_llm_Task_Base_Class
- Implementation:Princeton_nlp_Tree_of_thought_llm_Prompt_Templates
- Implementation:Princeton_nlp_Tree_of_thought_llm_Get_Task_Factory