Implementation:Spcl_Graph_of_thoughts_Improve_Operation
| Knowledge Sources | |
|---|---|
| Domains | Graph_Reasoning, Thought_Operations |
| Last Updated | 2026-02-14 |
| Implements | Principle:Spcl_Graph_of_thoughts_Thought_Improvement |
Overview
Implementation of the single-pass thought improvement pattern that refines existing thought states by prompting a language model for improvements without validation.
Description
The `Improve` class is a concrete operation in the Graph of Thoughts framework that performs a single-pass refinement of each predecessor thought by prompting the language model. It is implemented as a subclass of `Operation` with operation type `OperationType.improve`.
The execution flow for each predecessor thought is:
- Generate an improve prompt via `prompter.improve_prompt(**thought.state)`, passing the current thought state as keyword arguments
- Query the language model for exactly one response (`num_responses=1`)
- Parse the response via `parser.parse_improve_answer(thought.state, responses)` to extract a state update dictionary
- Create a new `Thought` by merging the original state with the update: `Thought({**thought.state, **state_update})`
- Append the new thought to the output list
Unlike ValidateAndImprove, this operation does not validate results, does not iterate, and produces exactly one output thought per input thought.
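The per-thought steps above can be sketched in plain Python. This is a minimal sketch with stand-in names (`improve_once`, `predecessor_thoughts`, and the stub `Thought` class are illustrative, not the framework's actual source):

```python
# Minimal sketch of the single-pass improve loop. `lm`, `prompter`,
# and `parser` stand in for the framework's model/prompter/parser
# objects; only the call pattern mirrors the documented flow.

class Thought:
    def __init__(self, state):
        self.state = state

def improve_once(predecessor_thoughts, lm, prompter, parser):
    improved = []
    for thought in predecessor_thoughts:
        # 1. Build the improve prompt from the current state.
        prompt = prompter.improve_prompt(**thought.state)
        # 2. Query the model for exactly one response.
        responses = lm.query(prompt, num_responses=1)
        # 3. Parse a state-update dictionary from the response.
        state_update = parser.parse_improve_answer(thought.state, responses)
        # 4. Merge: update keys override original keys; no validation,
        #    no iteration -- exactly one output per input.
        improved.append(Thought({**thought.state, **state_update}))
    return improved
```

Note that validation and retry logic is deliberately absent here; that is the distinguishing feature versus `ValidateAndImprove`.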
Usage
```python
from graph_of_thoughts.operations import Improve

# Create an Improve operation (no parameters needed)
improve = Improve()

# Wire into the graph after an aggregation or selection step
improve.add_predecessor(aggregate_op)
```
Code Reference
Source Location
- File: `graph_of_thoughts/operations/operations.py`, lines 480-533
- Import: `from graph_of_thoughts.operations import Improve`
Class Signature
```python
class Improve(Operation):
    operation_type: OperationType = OperationType.improve

    def __init__(self) -> None:
        """
        Initializes a new Improve operation.
        """
```
Key Methods
- `__init__(self) -> None` -- Initializes the operation with an empty `thoughts` list. Takes no configuration parameters.
- `get_thoughts(self) -> List[Thought]` -- Returns the list of improved thoughts after execution.
- `_execute(self, lm, prompter, parser, **kwargs) -> None` -- Core execution logic: iterates over predecessor thoughts, prompts the LM for an improvement, parses the state update, and creates a new merged thought.
Internal State
- `self.thoughts: List[Thought]` -- Stores the improved thoughts after execution; one output thought per input thought.
I/O Contract
| Input | Output | Side Effects |
|---|---|---|
| Predecessor thoughts from one or more predecessor operations. Each thought carries a state dictionary to be improved. | Improved thoughts with state updates merged -- one new `Thought` per input thought. Each output thought's state is `{**original_state, **state_update}`, where `state_update` is parsed from the LM response. | Queries the language model once per input thought via `lm.query()` with `num_responses=1`. Logs prompts and responses at DEBUG level and the count of improved thoughts at INFO level. |
State merge logic:

```python
state_update = parser.parse_improve_answer(thought.state, responses)
self.thoughts.append(Thought({**thought.state, **state_update}))
```
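The merge uses Python's dict-unpacking semantics: keys in `state_update` override keys of the same name in the original state, while all other keys carry over unchanged. A quick illustration (the `current` and `method` keys are hypothetical, chosen only to show the collision behavior):

```python
# Dict-unpacking merge: later entries win on key collisions.
original_state = {"current": "1 3 2", "method": "sort"}
state_update = {"current": "1 2 3"}  # hypothetical parsed LM update

merged = {**original_state, **state_update}
print(merged)  # {'current': '1 2 3', 'method': 'sort'}
```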
Assertions:
- At least one predecessor must exist (`len(self.predecessors) > 0`)
Usage Examples
Sorting: Post-Merge Refinement
```python
from graph_of_thoughts.operations import Aggregate, Score, KeepBestN, Improve

# After merging sorted sublists, refine the result
agg = Aggregate(num_responses=1)
score1 = Score(scoring_function=num_errors)
keep1 = KeepBestN(n=1, higher_is_better=False)

# Improve the best merged result
improve = Improve()
improve.add_predecessor(keep1)

# Then score and keep best again
score2 = Score(scoring_function=num_errors)
keep2 = KeepBestN(n=1, higher_is_better=False)
score2.add_predecessor(improve)
keep2.add_predecessor(score2)
```
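The `num_errors` name in the example is a user-supplied, domain-specific scoring function, not part of the framework. A hypothetical version for the sorting task might count adjacent out-of-order pairs in the candidate list (lower is better, which is why the example passes `higher_is_better=False`):

```python
# Hypothetical scoring function for sorting: count adjacent
# out-of-order pairs. The "current" key holding the candidate
# list is an assumption for this sketch.
def num_errors(state):
    values = state["current"]
    return sum(1 for a, b in zip(values, values[1:]) if a > b)
```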
Simple Refinement Pipeline
```python
from graph_of_thoughts.operations import Generate, Improve

# Generate an initial answer, then improve it once
gen = Generate(num_branches_prompt=1, num_branches_response=1)
improve = Improve()
improve.add_predecessor(gen)

# The Prompter's improve_prompt method determines what kind
# of improvement is requested (domain-specific).
```
Related Pages
- Principle:Spcl_Graph_of_thoughts_Thought_Improvement - The principle this implementation realizes
- Implementation:Spcl_Graph_of_thoughts_ValidateAndImprove_Operation - More complex variant with validation loop
- Implementation:Spcl_Graph_of_thoughts_Aggregate_Operation - Aggregate often precedes Improve for post-merge refinement
- Workflow:Spcl_Graph_of_thoughts_GoT_Sorting_Pipeline - Sorting workflow using Improve as a refinement step