
Implementation:Princeton nlp Tree of thought llm Naive Solve

From Leeroopedia
Knowledge Sources
Domains LLM_Reasoning, Evaluation, NLP
Last Updated 2026-02-14 03:30 GMT

Overview

naive_solve is a concrete tool for running naive input-output (IO) or Chain-of-Thought (CoT) baseline sampling, provided by the Tree of Thoughts BFS module.

Description

The naive_solve function generates solution candidates by calling get_samples once with an empty partial solution, producing n independent completions without any tree search, evaluation, or selection. It uses the same LLM configuration as solve() (monkey-patching gpt via functools.partial) but bypasses the entire BFS loop.
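The control flow described above can be sketched as follows. This is an illustrative reconstruction, not the repository's exact code: the `gpt` stub replaces the real LLM-backed sampler in `tot.models`, and `get_samples` is simplified to the single-prompt case.

```python
from functools import partial

def gpt(prompt, n=1, stop=None, model=None, temperature=None):
    # Stub standing in for the real LLM call in tot.models
    return [f"completion {i}" for i in range(n)]

def get_samples(task, x, y, n_generate_sample, prompt_sample, stop):
    # Wrap the input x and partial solution y into a prompt, then sample
    if prompt_sample == 'standard':
        prompt = task.standard_prompt_wrap(x, y)
    elif prompt_sample == 'cot':
        prompt = task.cot_prompt_wrap(x, y)
    else:
        raise ValueError(f'prompt_sample {prompt_sample} not recognized')
    samples = gpt(prompt, n=n_generate_sample, stop=stop)
    return [y + s for s in samples]

def naive_solve(args, task, idx, to_print=True):
    global gpt
    # Bind model and temperature onto gpt, as solve() does
    gpt = partial(gpt, model=args.backend, temperature=args.temperature)
    x = task.get_input(idx)
    # One call with an empty partial solution: no search, no evaluation
    ys = get_samples(task, x, '', args.n_generate_sample,
                     args.prompt_sample, stop=None)
    return ys, {}
```

Because the partial solution is always the empty string, each of the n completions is generated independently from the same prompt; the returned info dict stays empty since there are no search steps to log.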

Usage

Called from run.py when args.naive_run is True. Used for IO baselines (--prompt_sample standard) and CoT baselines (--prompt_sample cot). Returns the same tuple format as solve() for compatibility with the experiment loop.

Code Reference

Source Location

Signature

def naive_solve(args, task, idx, to_print=True):
    """
    Generate solutions via direct sampling (no tree search).

    Args:
        args: argparse.Namespace with fields:
            - backend (str): LLM model name
            - temperature (float): sampling temperature
            - n_generate_sample (int): number of samples to generate
            - prompt_sample (str): 'standard' or 'cot'
        task: Task object with standard_prompt_wrap() and cot_prompt_wrap()
        idx (int): puzzle index
        to_print (bool): whether to print debug info

    Returns:
        tuple: (ys, info)
            - ys (list[str]): n_generate_sample independent completions
            - info (dict): empty dict {}
    """

Import

from tot.methods.bfs import naive_solve

I/O Contract

Inputs

Name Type Required Description
args argparse.Namespace Yes Must have backend, temperature, n_generate_sample, prompt_sample
task Task Yes Task object with prompt wrap methods
idx int Yes Puzzle index in the task dataset
to_print bool No Print debug info (default True)

Outputs

Name Type Description
ys list[str] List of n_generate_sample independent completion strings
info dict Empty dictionary (no search steps to log)

Usage Examples

IO Baseline on Game of 24

import argparse
from tot.tasks import get_task
from tot.methods.bfs import naive_solve

args = argparse.Namespace(
    backend='gpt-4',
    temperature=0.7,
    n_generate_sample=100,
    prompt_sample='standard',
    naive_run=True,
)

task = get_task('game24')
ys, info = naive_solve(args, task, 900)
# ys contains 100 independent IO completions
# info is {}
print(f"Generated {len(ys)} samples")

CoT Baseline on Creative Writing

args = argparse.Namespace(
    backend='gpt-4',
    temperature=1.0,
    n_generate_sample=10,
    prompt_sample='cot',
    naive_run=True,
)

task = get_task('text')
ys, info = naive_solve(args, task, 0)
# ys contains 10 independent CoT completions

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
