Implementation: CarperAI Trlx DSL Interpreter
| Knowledge Sources | |
|---|---|
| Domains | Program_Synthesis, NLP, Data_Generation |
| Last Updated | 2026-02-07 16:00 GMT |
Overview
Concrete tool for interpreting and sampling programs in a toy list-manipulation DSL, used to generate synthetic datasets for grounded program synthesis experiments.
Description
The DSL Interpreter module defines a small domain-specific language for list manipulation (take, drop, sort, reverse, arithmetic operations, expand_copy). It provides an Interpreter class that evaluates composed expressions, a Sampler class that generates random multi-step programs, and a create_synthetic_dataset function that pairs input/output examples with the program that produced them. This module is used as the data generator for training language models to predict list-manipulation programs via PPO.
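The evaluation mechanism described above can be sketched with a minimal stand-in. The `MiniInterpreter` class below is hypothetical, not the repo's actual implementation: the documented `Interpreter` takes only the expression string, while this sketch binds `INPUT` to a concrete list explicitly so it is self-contained.

```python
# Minimal sketch of a dict-based DSL interpreter (illustration only,
# not the actual trlx implementation).
class MiniInterpreter:
    def __init__(self):
        # Map DSL function names to Python callables.
        self.env = {
            "take": lambda xs, n: xs[:n],
            "drop": lambda xs, n: xs[n:],
            "sort_asc": sorted,
            "reverse": lambda xs: xs[::-1],
            "add_n": lambda xs, n: [x + n for x in xs],
        }

    def __call__(self, statement_string, input_list):
        # Evaluate the expression with INPUT bound to a concrete list;
        # builtins are stripped so only DSL names resolve.
        scope = {**self.env, "INPUT": input_list}
        return eval(statement_string, {"__builtins__": {}}, scope)


interp = MiniInterpreter()
result = interp("sort_asc(take(INPUT, 3))", [5, 1, 4, 2])  # -> [1, 4, 5]
```

The dict-of-callables design means adding a DSL primitive is a one-line change to `env`, which is presumably why the real module documents `__init__` as "initializes the mapping from DSL function names to Python callables".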
Usage
Use this module when setting up the grounded program synthesis experiment. The Interpreter evaluates DSL strings into Python list operations, the Sampler generates random valid programs of configurable depth, and create_synthetic_dataset builds dataset dicts that can be written to train/test JSON files (e.g. via write_to_json).
Code Reference
Source Location
- Repository: CarperAI_Trlx
- File: examples/experiments/grounded_program_synthesis/lang.py
- Lines: 1-395
Signature
class Interpreter:
    def __init__(self) -> None:
        """Initializes the mapping from DSL function names to Python callables."""

    def __call__(self, statement_string: str):
        """
        Evaluates a DSL expression string and returns the result.

        Args:
            statement_string: A DSL expression (e.g. "sort_asc(take(INPUT, 3))").

        Returns:
            The evaluated Python result (list or int).
        """
class Sampler:
    def __init__(
        self,
        max_sample_length: int = 5,
        code_sep: str = ";",
        interpreter_sep: str = "->",
    ):
        """
        Args:
            max_sample_length: Max number of chained function compositions per sample.
            code_sep: Separator between chained expressions in output.
            interpreter_sep: Separator between code and I/O examples.
        """

    def sample_production(self, gen_length: int = 5) -> str:
        """Generates a random valid DSL program string of given depth."""
def create_synthetic_dataset(size: int, io_size: int = 3) -> dict:
    """
    Generates a synthetic dataset of DSL programs with input/output examples.

    Args:
        size: Number of samples to generate.
        io_size: Number of I/O example pairs per sample.

    Returns:
        Dict with 'samples' list of (prompt, label) pairs and 'meta'.
    """
Import
from examples.experiments.grounded_program_synthesis.lang import (
    Interpreter,
    Sampler,
    create_synthetic_dataset,
    write_to_json,
)
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| statement_string | str | Yes | DSL expression to evaluate (Interpreter) |
| size | int | Yes | Number of dataset samples (create_synthetic_dataset) |
| io_size | int | No | Number of I/O pairs per sample (default 3) |
| max_sample_length | int | No | Max program depth (Sampler, default 5) |
| gen_length | int | No | Depth of generated program (sample_production, default 5) |
Outputs
| Name | Type | Description |
|---|---|---|
| Interpreter result | list or int | Evaluated DSL expression result |
| Sampler.sample_production result | str | Random DSL program string |
| create_synthetic_dataset result | dict | Dataset with 'samples' (list of prompt/label pairs) and 'meta' |
Usage Examples
Generate a Synthetic Dataset
from examples.experiments.grounded_program_synthesis.lang import (
    Interpreter,
    Sampler,
    create_synthetic_dataset,
    write_to_json,
)
# 1. Generate 1000 training samples with 3 I/O pairs each
dataset = create_synthetic_dataset(size=1000, io_size=3)
# 2. Save to JSON
write_to_json(dataset, "train_data.json")
# 3. Evaluate a DSL expression manually, substituting a concrete list for INPUT
interp = Interpreter()
result = interp("sort_asc(take([5, 1, 4, 2], 3))")
# result is the evaluated Python value (a list here), per the I/O contract above
# 4. Sample a random program
sampler = Sampler(max_sample_length=3)
program = sampler.sample_production(gen_length=3)
print(program) # e.g. "reverse(sort_asc(add_n(INPUT, 2)))"