
Implementation:Iamhankai Forest of Thought Monte Carlo Forest

From Leeroopedia
Knowledge Sources
Domains Reasoning, Search_Algorithms, Ensemble_Methods
Last Updated 2026-02-14 03:00 GMT

Overview

A concrete tool, provided by the Forest-of-Thought repository, for orchestrating multi-tree reasoning forests.

Description

The Monte_Carlo_Forest class is the top-level orchestrator for FoT evaluation. It manages a configurable number of reasoning trees, dispatches to the appropriate base search algorithm (MCTS via mctsr_run, ToT via tot_run, or CoT via cot_run), tracks per-tree answers and activation signals, implements early stopping via majority consensus, and delegates final answer selection to get_fot_final_answer.
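The orchestration described above can be sketched as follows. This is a minimal illustration, not the repository's code: the `run_forest` function, its `search_fns` dispatch table, and the `seed` parameter are hypothetical stand-ins for the real dispatch to mctsr_run / tot_run / cot_run and the majority-consensus early stop.

```python
from collections import Counter

def run_forest(query, tree_nums, base_mode, search_fns):
    """Hypothetical sketch: run each tree with the configured base
    algorithm, stopping early once a strict majority of the full
    forest agrees on one answer."""
    answers = []
    for t in range(tree_nums):
        # search_fns maps base_mode -> callable, mirroring the
        # dispatch to mctsr_run / tot_run / cot_run.
        answers.append(search_fns[base_mode](query, seed=t))
        # Early stop: a strict majority of tree_nums trees agree.
        top_ans, count = Counter(answers).most_common(1)[0]
        if count > tree_nums / 2:
            return top_ans, answers
    # No consensus: fall back to the most common answer overall.
    return Counter(answers).most_common(1)[0][0], answers
```

With three trees, two matching answers already form a strict majority, so the third tree is never expanded.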

Usage

Instantiate with parsed arguments, then call run() for each dataset example in the evaluation loop. The run() method returns per-example results including the forest's final answer, per-tree answers, and correctness flags.

Code Reference

Source Location

Signature

class Monte_Carlo_Forest:
    def __init__(self, args) -> None:
        """
        Args:
            args (argparse.Namespace): Configuration with:
                - max_iter (int): MCTS iterations per tree
                - dataset (str): Dataset identifier
                - model_path (str): LLM checkpoint path
                - tree_nums (int): Number of trees in forest
                - output_dir (str): Output directory
                - stop (str): Stopping strategy (cgdm/random/majority/score)
                - base_mode (str): Base algorithm (mcts/cot/tot)
        """

    def run(self, example) -> list:
        """
        Execute forest search on a single dataset example.

        Args:
            example: Dataset example with query and ground truth.

        Returns:
            list[dict]: Per-example results with keys:
                - query (str): Original question
                - ground_truth (str): Expected answer
                - fot_ans (str): Forest's final answer
                - trees_ans (list): Per-tree answers
                - is_correct (bool): Correctness flag
                - correct_num (int): Running accuracy counter
        """

    def get_fot_final_answer(
        self, query, activated_answers_list,
        total_answers_list, activated_answer_scores=[],
        t=-1, fot=False
    ) -> str:
        """
        Select final answer using configured stopping strategy.

        Args:
            query (str): Original question
            activated_answers_list (list): Extracted answers from trees
            total_answers_list (list): All raw tree outputs
            activated_answer_scores (list): Reward scores
            t (int): Tree index
            fot (bool): Forest-of-trees context flag

        Returns:
            str: Selected best answer
        """

Import

# Monte_Carlo_Forest is defined inline in the main script rather
# than in a standalone module; importing it requires the script
# itself to be on the Python path:
from run_with_mcf_stop_noearly import Monte_Carlo_Forest

I/O Contract

Inputs

args (argparse.Namespace, required): Full experiment configuration from parse_args()
example (datasets.Dataset row, required): Single dataset example with query and ground-truth fields

Outputs

results (list[dict]): Per-example result dictionaries with query, ground_truth, fot_ans, trees_ans, is_correct, correct_num
JSON files: Per-example JSON records written to output_dir/{dataset}/jsons/{hash}.json
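The JSON output path above can be produced with a small helper. This is a sketch under stated assumptions: `write_result_record` is a hypothetical name, and deriving {hash} as an MD5 of the query text is a guess, not the repository's documented scheme.

```python
import hashlib
import json
import os

def write_result_record(result, output_dir, dataset):
    """Hypothetical sketch: persist one per-example record to
    output_dir/{dataset}/jsons/{hash}.json. The hash here is
    assumed to be an MD5 of the query text."""
    json_dir = os.path.join(output_dir, dataset, "jsons")
    os.makedirs(json_dir, exist_ok=True)
    name = hashlib.md5(result["query"].encode("utf-8")).hexdigest()
    path = os.path.join(json_dir, f"{name}.json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(result, f, ensure_ascii=False, indent=2)
    return path
```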

Usage Examples

Basic Forest Evaluation

from models.load_local_model import Pipeline
from utils.utils import mcts_load_data
# Monte_Carlo_Forest and parse_args are assumed importable from the
# main script (see Import above); adjust if they live elsewhere.
from run_with_mcf_stop_noearly import Monte_Carlo_Forest, parse_args

# 1. Setup
args = parse_args()
client = Pipeline(model_id=args.model_path, model_type=args.model_type)
dataset = mcts_load_data(args)

# 2. Create forest
mcf = Monte_Carlo_Forest(args)

# 3. Evaluate each example
for i, example in enumerate(dataset):
    results = mcf.run(example)
    for r in results:
        print(f"Q: {r['query'][:50]}... A: {r['fot_ans']} Correct: {r['is_correct']}")

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
