Implementation: parse_args (princeton-nlp/tree-of-thought-llm)
| Knowledge Sources | |
|---|---|
| Domains | Experiment_Management, CLI_Design |
| Last Updated | 2026-02-14 03:30 GMT |
Overview
Concrete utility in the Tree of Thoughts CLI entry point that parses command-line experiment arguments.
Description
The parse_args function defines and parses all CLI arguments for running ToT experiments using Python's argparse module. It produces an argparse.Namespace object containing all hyperparameters needed by the experiment loop, including LLM backend selection, task choice, search method configuration, and baseline mode flags.
Usage
Called once at program startup in the __main__ block of run.py. The returned namespace is passed directly to run(), which uses it for task instantiation, file path construction, and solver invocation.
Code Reference
Source Location
- Repository: tree-of-thought-llm
- File: run.py
- Lines: 43-63
Signature
def parse_args():
    """
    Parse command-line arguments for ToT experiments.

    Returns:
        argparse.Namespace: Parsed arguments with fields:
            - backend (str): 'gpt-4', 'gpt-3.5-turbo', or 'gpt-4o'
            - temperature (float): sampling temperature (default 0.7)
            - task (str): 'game24', 'text', or 'crosswords' (required)
            - task_start_index (int): start puzzle index (default 900)
            - task_end_index (int): end puzzle index (default 1000)
            - naive_run (bool): if True, run baseline instead of ToT
            - prompt_sample (str): 'standard' or 'cot' (for baseline/sample)
            - method_generate (str): 'sample' or 'propose'
            - method_evaluate (str): 'value' or 'vote'
            - method_select (str): 'sample' or 'greedy' (default 'greedy')
            - n_generate_sample (int): generation samples (default 1)
            - n_evaluate_sample (int): evaluation samples (default 1)
            - n_select_sample (int): candidates to keep (default 1)
    """
Import
# parse_args is defined in run.py (not in a package module)
# Typically called directly from __main__
from run import parse_args # if run.py is on PYTHONPATH
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| sys.argv | list[str] | Yes | Command-line arguments passed to the script |
Outputs
| Name | Type | Description |
|---|---|---|
| return | argparse.Namespace | Namespace with all experiment hyperparameters as attributes |
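Downstream code reads the returned namespace as plain attributes. The sketch below shows that consumption pattern; the log_path helper and its file-name format are illustrative assumptions, not the actual path logic in run.py:

```python
from argparse import Namespace


def log_path(args: Namespace) -> str:
    # Hypothetical helper: builds an output file name from namespace
    # attributes, the way run() derives paths from its args. The exact
    # format here is an assumption for illustration only.
    method = 'naive' if args.naive_run else f'{args.method_generate}_{args.method_evaluate}'
    return f'logs/{args.task}/{args.backend}_{args.temperature}_{method}.json'


args = Namespace(backend='gpt-4', temperature=0.7, task='game24',
                 naive_run=False, method_generate='propose',
                 method_evaluate='value')
print(log_path(args))  # logs/game24/gpt-4_0.7_propose_value.json
```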
Usage Examples
ToT BFS Experiment (Game of 24)
# Equivalent to: python run.py --task game24 --method_generate propose \
# --method_evaluate value --method_select greedy \
# --n_generate_sample 1 --n_evaluate_sample 3 --n_select_sample 5
# The parse_args() call inside run.py will produce:
# Namespace(backend='gpt-4', temperature=0.7, task='game24',
# task_start_index=900, task_end_index=1000,
# naive_run=False, prompt_sample=None,
# method_generate='propose', method_evaluate='value',
# method_select='greedy', n_generate_sample=1,
# n_evaluate_sample=3, n_select_sample=5)
Baseline IO Experiment
# Equivalent to: python run.py --task game24 --naive_run \
# --prompt_sample standard --n_generate_sample 100
# parse_args() produces:
# Namespace(backend='gpt-4', temperature=0.7, task='game24',
# task_start_index=900, task_end_index=1000,
# naive_run=True, prompt_sample='standard',
# n_generate_sample=100, ...)
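The two namespaces above illustrate standard argparse behavior: naive_run is a store_true flag (False unless passed), and prompt_sample has no default, so it comes back as None in ToT runs. A standalone parser (not the repo's) demonstrating exactly that:

```python
import argparse

# Minimal stand-in parser covering only the flags relevant to the
# baseline example; it is not the full parser from run.py.
parser = argparse.ArgumentParser()
parser.add_argument('--naive_run', action='store_true')
parser.add_argument('--prompt_sample', choices=['standard', 'cot'])
parser.add_argument('--n_generate_sample', type=int, default=1)

baseline = parser.parse_args(['--naive_run', '--prompt_sample', 'standard',
                              '--n_generate_sample', '100'])
tot = parser.parse_args([])
print(baseline.naive_run, tot.prompt_sample)  # True None
```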