Heuristic: `functools.partial` Model Binding in Princeton NLP's tree-of-thought-llm
| Knowledge Sources | |
|---|---|
| Domains | API_Design, Infrastructure |
| Last Updated | 2026-02-14 04:00 GMT |
Overview
Uses `functools.partial` to monkey-patch the global `gpt` function with experiment-specific model and temperature, avoiding parameter threading through the entire call stack.
Description
At the start of each experiment run, `solve()` and `naive_solve()` rebind the module-level `gpt` function using `functools.partial`, fixing the `model` and `temperature` parameters for the duration of the experiment. This means all downstream functions (`get_proposals`, `get_values`, `get_votes`, `get_samples`) can call `gpt(prompt, n=...)` without needing to know or pass the model and temperature; those are already baked in. This is a form of implicit configuration via monkey-patching.
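A minimal, self-contained sketch of the pattern. The `gpt` stub, `solve`, and `get_proposals` below are simplified stand-ins for the real functions (the actual `gpt` in `tot.models` calls the OpenAI API):

```python
from functools import partial

# Hypothetical stand-in for the real gpt() wrapper in tot.models.
def gpt(prompt, model="gpt-4", temperature=0.7, n=1):
    return [f"{model}@{temperature}: {prompt}"] * n

def solve(backend, temperature):
    # Rebind the module-level name: every later gpt() call in this
    # module now carries the experiment's model and temperature.
    global gpt
    gpt = partial(gpt, model=backend, temperature=temperature)

def get_proposals(prompt):
    # Downstream code never mentions model or temperature.
    return gpt(prompt, n=2)

solve("gpt-3.5-turbo", 0.0)
print(get_proposals("next step?"))
# → ['gpt-3.5-turbo@0.0: next step?', 'gpt-3.5-turbo@0.0: next step?']
```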
Usage
This pattern is applied automatically at the start of every `solve()` and `naive_solve()` call. Developers extending the framework should be aware that the `gpt` reference inside `bfs.py` is not the original function after the first call: it is a partially applied version. New functions that call `gpt` within `bfs.py` will automatically inherit the bound parameters.
The Insight (Rule of Thumb)
- Action: At experiment start, execute `global gpt; gpt = partial(gpt, model=args.backend, temperature=args.temperature)`.
- Value: Eliminates the need to thread `model` and `temperature` through every function call in the search loop.
- Trade-off: Mutates global state, making the code harder to reason about. After the first `solve()` call, the `gpt` name in `bfs.py` refers to a `partial` object rather than the original function, and each subsequent run wraps it again. Because later keyword bindings override earlier ones, the most recent experiment's settings take effect, but the configuration is invisible at the call sites and awkward to inspect or reset.
Reasoning
The BFS search loop calls `gpt()` indirectly through multiple layers: `solve → get_proposals/get_samples → gpt` and `solve → get_values/get_votes → get_value → gpt`. Threading `model` and `temperature` through every intermediate function would add two parameters to five or more function signatures. The `functools.partial` approach trades code clarity for brevity: a pragmatic choice for a research codebase where experiments are run as standalone scripts.
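For comparison, the explicit alternative is to build the bound callable once and pass it down, so only one extra parameter (`llm` here, a hypothetical name) threads through the signatures and no global is mutated. A sketch with simplified stand-in functions:

```python
from functools import partial

def gpt(prompt, model="gpt-4", temperature=0.7, n=1):  # hypothetical stub
    return [f"{model}:{prompt}"] * n

def get_value(llm, state):
    # Intermediate functions grow by a single `llm` parameter instead of
    # separate model/temperature parameters.
    return llm(f"value of {state}")[0]

def solve(backend, temperature, states):
    llm = partial(gpt, model=backend, temperature=temperature)
    return [get_value(llm, s) for s in states]

print(solve("gpt-3.5-turbo", 0.0, ["a", "b"]))
# → ['gpt-3.5-turbo:value of a', 'gpt-3.5-turbo:value of b']
```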
Code Evidence
Monkey-patching in `src/tot/methods/bfs.py:49-51`:
```python
def solve(args, task, idx, to_print=True):
    global gpt
    gpt = partial(gpt, model=args.backend, temperature=args.temperature)
```
Same pattern in `src/tot/methods/bfs.py:90-92`:
```python
def naive_solve(args, task, idx, to_print=True):
    global gpt
    gpt = partial(gpt, model=args.backend, temperature=args.temperature)
```
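A quick check of the re-binding behavior with a hypothetical stub (not the real API client): when a `partial` wraps an already-bound `gpt`, the outer keywords override the inner ones, so the most recent binding takes effect on later calls.

```python
from functools import partial

def gpt(prompt, model="gpt-4", temperature=0.7):  # hypothetical stub
    return (model, temperature)

gpt = partial(gpt, model="gpt-4", temperature=0.7)          # first solve()
gpt = partial(gpt, model="gpt-3.5-turbo", temperature=0.0)  # second solve()

# Outer keywords override inner ones, so the second binding wins.
print(gpt("q"))  # → ('gpt-3.5-turbo', 0.0)
```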