
Implementation:princeton-nlp/tree-of-thought-llm gpt_usage

From Leeroopedia
Knowledge Sources
Domains Infrastructure, Experiment_Management
Last Updated 2026-02-14 03:30 GMT

Overview

A concrete utility, provided by the Tree of Thoughts models module, for reporting accumulated API token usage and estimated cost.

Description

The gpt_usage function reads the module-level globals completion_tokens and prompt_tokens, which the chatgpt wrapper increments on every LLM call, applies model-specific pricing, and returns a summary dictionary. It is called both per puzzle (the snapshot is stored in the JSON logs) and at experiment completion (the totals are printed to stdout).

Usage

Import and call this function at the end of an experiment run, or after each puzzle, to get a snapshot of cumulative token usage and cost. It is called in run.py:L27 (per-puzzle logging) and run.py:L40 (final report).

Code Reference

Source Location

Signature

def gpt_usage(backend="gpt-4"):
    """
    Report accumulated token usage and estimated cost.

    Args:
        backend (str): Model name for pricing lookup.
            'gpt-4': $0.03/$0.06 per 1K tokens (prompt/completion).
            'gpt-3.5-turbo': $0.0015/$0.002 per 1K tokens.
            'gpt-4o': $0.01/$0.0025 per 1K tokens.

    Returns:
        dict: {
            'completion_tokens': int,
            'prompt_tokens': int,
            'cost': float
        }
    """

Import

from tot.models import gpt_usage

I/O Contract

Inputs

Name     Type  Required  Description
backend  str   No        Model name for pricing lookup (default 'gpt-4')

Outputs

Name    Type  Description
return  dict  Contains 'completion_tokens' (int), 'prompt_tokens' (int), and 'cost' (float in USD)

Usage Examples

Reporting Usage After Experiment

from tot.models import gpt_usage

# After running an experiment with gpt-4
usage = gpt_usage("gpt-4")
print(f"Prompt tokens: {usage['prompt_tokens']}")
print(f"Completion tokens: {usage['completion_tokens']}")
print(f"Estimated cost: ${usage['cost']:.2f}")
# e.g., "Estimated cost: $4.52"

Per-Puzzle Usage Logging

import json
from tot.models import gpt_usage

# During experiment loop, log usage snapshot per puzzle
info = {'idx': i, 'ys': ys, 'usage_so_far': gpt_usage(args.backend)}
logs.append(info)
with open(log_file, 'w') as f:
    json.dump(logs, f, indent=4)

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
