
Principle:Princeton nlp Tree of thought llm Usage Tracking

From Leeroopedia
Knowledge Sources
Domains Infrastructure, Experiment_Management
Last Updated 2026-02-14 03:30 GMT

Overview

A global accumulation mechanism that tracks API token consumption and computes estimated cost across all LLM calls during an experiment.

Description

Usage Tracking addresses the need to understand the computational cost of tree search experiments that make many LLM calls. Since the Tree of Thoughts approach requires multiple LLM calls per search step (generation + evaluation) across multiple depth levels and puzzle instances, the total token usage can be substantial. By tracking prompt and completion tokens globally, the framework enables:

  1. Cost estimation: Apply model-specific per-token pricing to compute dollar cost.
  2. Method comparison: Compare token efficiency between ToT BFS and naive baselines.
  3. Budgeting: Monitor spending across an experiment run in real time.

Usage

Use this principle at the end of an experiment run (or per-puzzle) to report accumulated token usage and cost. It is the final reporting step in both ToT BFS and baseline experiments.

Theoretical Basis

Token usage accumulates across all LLM calls during an experiment:

total_cost = (prompt_tokens / 1000) × p_prompt + (completion_tokens / 1000) × p_completion

Where p_prompt and p_completion are the model-specific prices per 1,000 tokens:

Model          Prompt ($/1K)   Completion ($/1K)
gpt-4          $0.03           $0.06
gpt-3.5-turbo  $0.0015         $0.002
gpt-4o         $0.01           $0.0025
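Applying the formula to the table above, the cost computation can be sketched as follows. The function and dictionary names are assumptions for illustration, not the framework's actual identifiers:

```python
# Per-1K-token prices taken from the table above; names are illustrative.
PRICES = {
    "gpt-4": {"prompt": 0.03, "completion": 0.06},
    "gpt-3.5-turbo": {"prompt": 0.0015, "completion": 0.002},
    "gpt-4o": {"prompt": 0.01, "completion": 0.0025},
}

def estimate_cost(model, prompt_tokens, completion_tokens):
    """Estimated dollar cost: tokens / 1000 times the per-1K price."""
    p = PRICES[model]
    return (prompt_tokens / 1000) * p["prompt"] \
         + (completion_tokens / 1000) * p["completion"]
```

For example, a gpt-4 run consuming 120,000 prompt tokens and 40,000 completion tokens would be estimated at 120 × $0.03 + 40 × $0.06 = $6.00.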

Related Pages

Implemented By

Uses Heuristic
