# Principle: Chain-of-Thought Reasoning (iamhankai Forest-of-Thought)
| Field | Value |
|---|---|
| Knowledge Sources | |
| Domains | Reasoning, NLP |
| Last Updated | 2026-02-14 03:00 GMT |
## Overview
A prompting technique that elicits step-by-step reasoning from language models to improve performance on complex multi-step problems.
## Description
Chain-of-Thought (CoT) prompting encourages LLMs to generate intermediate reasoning steps before arriving at a final answer. Instead of directly outputting an answer, the model is prompted to "think step by step," decomposing the problem into manageable sub-steps. In the FoT framework, CoT serves as the simplest base reasoning mode, producing a single reasoning path per tree without branching or search.
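The contrast between direct answering and CoT prompting can be sketched as plain prompt construction; the prompt wording below is illustrative, not the exact template used by the framework:

```python
def build_direct_prompt(question: str) -> str:
    # Direct answering: the model must produce the answer in one shot.
    return f"Answer the following question:\n{question}\nAnswer:"

def build_cot_prompt(question: str) -> str:
    # Chain-of-Thought: ask the model to externalize its intermediate
    # steps before committing to an answer (zero-shot CoT trigger phrase).
    return f"{question}\nLet's think step by step."
```
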
CoT is used as:
- Baseline mode: For comparison against more sophisticated search methods (MCTS, ToT)
- Fast inference: When computational budget is limited and single-pass generation suffices
- Self-correction variant: Can be combined with confidence scoring and re-prompting
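The self-correction variant in the last bullet can be sketched as follows; `generate` and `score_confidence` are hypothetical stand-ins for a real LLM call and a confidence scorer, injected here as plain callables:

```python
def cot_with_self_correction(question, generate, score_confidence,
                             threshold=0.7):
    # First pass: single CoT generation.
    answer = generate(f"Solve step by step:\n{question}")
    if score_confidence(answer) >= threshold:
        return answer
    # Low confidence: re-prompt once, including the draft for revision.
    retry_prompt = (f"Solve step by step:\n{question}\n"
                    f"A previous attempt was:\n{answer}\n"
                    "Check each step and give a corrected solution.")
    return generate(retry_prompt)
```

The threshold and the single-retry policy are illustrative design choices; a real scorer might use log-probabilities or a verifier model.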
## Usage

Use this principle when the base search mode is set to `cot`. CoT is appropriate for simpler problems, or for ablation studies comparing single-pass reasoning against multi-tree search.
## Theoretical Basis

CoT exploits the observation that LLMs solve complex problems more accurately when they externalize intermediate reasoning:

Key insight: for a problem P requiring k reasoning steps, each of which the model completes correctly with probability p:
- Direct answering: all k implicit steps must succeed at once, so P(correct) ≈ p^k, which decays exponentially in k
- CoT: each externalized step can be checked (and corrected) independently, reducing compound error
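A quick numeric illustration of the compound-error argument, using a per-step accuracy of p = 0.9 (chosen arbitrarily for illustration):

```python
# With per-step accuracy p, a one-shot answer implicitly requires
# all k steps to succeed, so accuracy decays as p**k.
p = 0.9
for k in (1, 5, 10, 20):
    print(f"k={k:2d}  P(correct) = {p**k:.3f}")
# Output:
# k= 1  P(correct) = 0.900
# k= 5  P(correct) = 0.590
# k=10  P(correct) = 0.349
# k=20  P(correct) = 0.122
```

Even with 90% per-step accuracy, a 20-step problem is solved in one shot only about 12% of the time, which is why stepwise verification pays off.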
Pseudo-code:

```python
# Abstract CoT reasoning: prompt for step-by-step reasoning, then parse
# the final answer out of the generated solution text.
prompt = f"Solve step by step:\n{question}"
solution = llm.generate(prompt)           # single-pass generation, no search
summary = extract_final_answer(solution)  # parse the final answer from the text
```
In FoT, each CoT tree in the forest generates an independent solution, and diversity arises from stochastic sampling (temperature > 0).