Implementation: Iamhankai/Forest-of-Thought CoT Task Run
| Field | Value |
|---|---|
| Domains | Reasoning, NLP |
| Last Updated | 2026-02-14 03:00 GMT |
Overview
A concrete tool from the Forest-of-Thought repository for executing single-pass Chain-of-Thought (CoT) reasoning.
Description
The CoT_Task class extends SearchTask to implement Chain-of-Thought reasoning. It generates a single reasoning path via one LLM call, extracts a summary answer using dataset-specific extraction logic, and optionally applies self-criticism scoring. This is the simplest of the three base reasoning modes in FoT.
Usage
Instantiated by Monte_Carlo_Forest.cot_run() or called directly via get_cot_answer() in the main orchestrator. Used when base_mode is set to `cot` in the experiment configuration.
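A minimal sketch of how an orchestrator might dispatch on the base_mode setting; the function name and shape here are illustrative assumptions, not the repository's actual dispatcher.

```python
def run_base_mode(question: str, base_mode: str) -> dict:
    # Illustrative dispatcher: when base_mode == 'cot', the orchestrator
    # would construct CoT_Task(data=question, ...) and call .run().
    if base_mode == "cot":
        return {"mode": "cot", "content": question}
    raise NotImplementedError(f"base_mode {base_mode!r} not handled in this sketch")
```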
Code Reference
Source Location
- Repository: Forest-of-Thought
- File: methods/cot/task.py
- Lines: L8-180
Signature
```python
class CoT_Task(SearchTask):
    def __init__(
        self, data, propose_method='glm', value_method='glm',
        temperature=0.7, max_tokens=2048, seed=170,
        max_length=2048, truncation=True, do_sample=True,
        max_new_tokens=1024, evaluate='', summary=False,
        lang='en', answer=None, verify_method='string',
        do_self_critic=False
    ):
        """
        Args:
            data: Input problem text.
            evaluate (str): Dataset type (math/scibench/scieval).
            summary (bool): Generate an explicit summary.
            lang (str): Language ('en' or 'zh').
            answer: Ground truth for verification.
            do_self_critic (bool): Enable self-criticism scoring.
        """

    def run(self) -> dict:
        """
        Execute single-pass CoT reasoning.

        Returns:
            dict: Keys: content, solution, summary, accurate,
                real_answer, self_critic (if enabled).
        """
```
Import
```python
from methods.cot.task import CoT_Task
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| data | str | Yes | Problem text to solve |
| evaluate | str | No | Dataset type for answer extraction (default: `''`, no dataset-specific extraction) |
| lang | str | No | Language: en or zh (default: en) |
| answer | str | No | Ground truth for verification |
| do_self_critic | bool | No | Enable self-criticism confidence scoring (default: False) |
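The evaluate flag selects how the final answer is pulled out of the reasoning text, since each benchmark phrases answers differently. A minimal illustration of such dispatching follows; the regex patterns are assumptions for demonstration, not the repository's exact extractors.

```python
import re

def extract_summary(solution: str, evaluate: str = "") -> str:
    # Dataset-specific extraction: pick a pattern per benchmark family.
    if evaluate == "math":
        m = re.search(r"The answer is\s*(-?\d+(?:\.\d+)?)", solution)
    elif evaluate in ("scibench", "scieval"):
        m = re.search(r"answer:\s*(.+)$", solution, re.IGNORECASE)
    else:
        m = None
    # Fall back to the last line when no pattern matches.
    return m.group(1).strip() if m else solution.splitlines()[-1].strip()
```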
Outputs
| Name | Type | Description |
|---|---|---|
| result | dict | Keys: content (question), solution (full reasoning), summary (final answer), accurate (bool), real_answer (ground truth) |
Usage Examples
```python
from methods.cot.task import CoT_Task

task = CoT_Task(
    data="A train travels 60 km/h for 2 hours. How far does it go?",
    evaluate='math',
    lang='en',
    answer="120"
)
result = task.run()
print(f"Solution: {result['solution']}")
print(f"Answer: {result['summary']}")
print(f"Correct: {result['accurate']}")
```
Related Pages
- Implements Principle
- Requires Environment