Implementation: princeton-nlp/tree-of-thought-llm solve (BFS)
| Knowledge Sources | |
|---|---|
| Domains | Search_Algorithms, LLM_Reasoning, NLP |
| Last Updated | 2026-02-14 03:30 GMT |
Overview
The solve function is the concrete entry point for running breadth-first tree search over LLM-generated thoughts, provided by the Tree of Thoughts framework.
Description
The solve function implements the core BFS loop of the Tree of Thoughts algorithm. It takes an experiment configuration, a task object, and a puzzle index, then iteratively generates thought candidates, evaluates them, and selects the best ones across a fixed number of depth steps. Before entering the search loop, the function monkey-patches the global gpt function with the configured model and temperature using functools.partial.
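The generate/evaluate/select loop described above can be sketched as follows. This is a simplified illustration, not the repository's code: bfs_solve, generate, and evaluate are stand-in names for solve and its LLM-backed helpers, and only greedy selection is shown.

```python
def bfs_solve(x, steps, n_select, generate, evaluate):
    """Simplified BFS over partial solutions (illustrative sketch)."""
    ys = ['']            # frontier of partial solutions, starting empty
    infos = []
    for step in range(steps):
        # 1. Generation: expand each frontier candidate into new thoughts
        new_ys = [y + t for y in ys for t in generate(x, y)]
        # 2. Evaluation: score every expanded candidate
        values = [evaluate(x, y) for y in new_ys]
        # 3. Selection (greedy): keep the n_select highest-scoring candidates
        ranked = sorted(range(len(new_ys)), key=lambda i: values[i], reverse=True)
        ys = [new_ys[i] for i in ranked[:n_select]]
        infos.append({'step': step, 'x': x, 'new_ys': new_ys,
                      'values': values, 'select_new_ys': ys})
    return ys, {'steps': infos}
```

In the real implementation the generation and evaluation steps are LLM calls whose prompts come from the task object, but the control flow has this shape.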
Usage
Import and call this function when running a full Tree of Thoughts BFS experiment on any supported task. It is the main entry point for ToT search, called from the run() function in run.py when naive_run is False.
Code Reference
Source Location
- Repository: tree-of-thought-llm
- File: src/tot/methods/bfs.py
- Lines: 49-88
Signature
```python
def solve(args, task, idx, to_print=True):
    """
    Run BFS tree search for a single puzzle instance.

    Args:
        args: argparse.Namespace with fields:
            - backend (str): LLM model name (e.g., 'gpt-4')
            - temperature (float): sampling temperature
            - method_generate (str): 'sample' or 'propose'
            - method_evaluate (str): 'value' or 'vote'
            - method_select (str): 'sample' or 'greedy'
            - prompt_sample (str): 'standard' or 'cot' (if method_generate='sample')
            - n_generate_sample (int): number of generation samples
            - n_evaluate_sample (int): number of evaluation samples
            - n_select_sample (int): number of candidates to keep per step
        task: Task object providing get_input(), prompt-wrap methods, and steps/stops
        idx (int): puzzle index in the task dataset
        to_print (bool): whether to print intermediate results

    Returns:
        tuple: (ys, info)
            - ys (list[str]): final candidate solutions
            - info (dict): {'steps': list[dict]} with per-step logs
    """
```
Import
```python
from tot.methods.bfs import solve
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| args | argparse.Namespace | Yes | Experiment configuration with backend, temperature, method_generate, method_evaluate, method_select, n_generate_sample, n_evaluate_sample, n_select_sample |
| task | Task | Yes | Instantiated task object exposing get_input(), prompt-wrap methods, steps, stops, and value_cache |
| idx | int | Yes | Index of the puzzle in the task dataset |
| to_print | bool | No | Whether to print intermediate results (default True) |
Outputs
| Name | Type | Description |
|---|---|---|
| ys | list[str] | Final candidate solution strings after BFS search |
| info | dict | Contains 'steps' key mapping to list of per-step dicts with 'step', 'x', 'ys', 'new_ys', 'values', 'select_new_ys' |
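The per-step log can be inspected as below. The info dict here is a mock built from the structure in the table above (the candidate strings and values are invented for illustration), not real solver output.

```python
# Mock of the info dict returned by solve(), following the documented keys.
info = {'steps': [
    {'step': 0, 'x': '4 5 6 10', 'ys': [''],
     'new_ys': ['4 + 6 = 10 (left: 5 10 10)\n', '5 * 6 = 30 (left: 4 10 30)\n'],
     'values': [20.0, 1.0],
     'select_new_ys': ['4 + 6 = 10 (left: 5 10 10)\n']},
]}

# Summarize each search step: how many candidates were kept, and the best score.
summaries = []
for rec in info['steps']:
    best_value = max(rec['values'])
    summaries.append(f"step {rec['step']}: kept {len(rec['select_new_ys'])} of "
                     f"{len(rec['new_ys'])} candidates; best value {best_value}")
print('\n'.join(summaries))
```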
Usage Examples
Running BFS on Game of 24
```python
import argparse

from tot.tasks import get_task
from tot.methods.bfs import solve

# 1. Create the experiment configuration
args = argparse.Namespace(
    backend='gpt-4',
    temperature=0.7,
    method_generate='propose',
    method_evaluate='value',
    method_select='greedy',
    prompt_sample=None,
    n_generate_sample=1,
    n_evaluate_sample=3,
    n_select_sample=5,
)

# 2. Instantiate the task
task = get_task('game24')

# 3. Run BFS solve on puzzle index 900
ys, info = solve(args, task, 900)

# 4. ys holds the top candidate solutions;
#    info['steps'] holds per-step generation/evaluation logs
print(f"Solutions: {ys}")
print(f"Steps logged: {len(info['steps'])}")
```
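The method_select option in the configuration above governs how the frontier is pruned each step. A minimal sketch of the assumed behavior follows; select here is an illustrative helper, not the repository's function.

```python
import random

def select(new_ys, values, n, method):
    """Illustrative selection step: keep n of the scored candidates."""
    if method == 'greedy':
        # 'greedy': keep the n highest-valued candidates
        ids = sorted(range(len(new_ys)), key=lambda i: values[i], reverse=True)[:n]
    else:  # 'sample'
        # 'sample': draw n candidates with probability proportional to value
        ids = random.choices(range(len(new_ys)), weights=values, k=n)
    return [new_ys[i] for i in ids]
```

Greedy selection is deterministic given the values; sampling trades that determinism for diversity in the retained frontier.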
Running BFS on Creative Writing
```python
import argparse

from tot.tasks import get_task
from tot.methods.bfs import solve

args = argparse.Namespace(
    backend='gpt-4',
    temperature=1.0,
    method_generate='sample',
    method_evaluate='vote',
    method_select='greedy',
    prompt_sample='cot',
    n_generate_sample=5,
    n_evaluate_sample=5,
    n_select_sample=5,
)

task = get_task('text')
ys, info = solve(args, task, 0)
```
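With method_evaluate='vote', as in the creative-writing configuration, candidates are compared against each other rather than scored independently: each evaluation call picks the single best candidate, and vote counts become the values. A minimal sketch of that assumed aggregation, with vote_values as a hypothetical helper:

```python
from collections import Counter

def vote_values(candidates, votes):
    """Hypothetical vote aggregation: each vote is the index of the candidate
    an evaluation call judged best; a candidate's value is its vote count.
    Out-of-range votes (e.g., unparseable LLM replies) are ignored."""
    counts = Counter(v for v in votes if 0 <= v < len(candidates))
    return [counts.get(i, 0) for i in range(len(candidates))]
```

These values then feed the same selection step as the 'value' evaluation mode.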