
Principle:CarperAI Trlx Reward Function Design

From Leeroopedia


Knowledge Sources
Domains: Reinforcement_Learning, Reward_Modeling, NLP
Last Updated: 2026-02-07 16:00 GMT

Overview

A design principle for creating reward functions that score language model generations during online reinforcement learning training.

Description

In RLHF, the reward function is the critical signal that guides the language model toward desired behavior. During PPO training, the model generates text from prompts, and a reward function scores each generation; these scores drive the policy-gradient updates. Reward functions can be rule-based (e.g., sentiment classifiers), model-based (trained reward models), or hybrids of the two.

The reward function must satisfy a specific interface: it receives the generated text samples and, optionally, the original prompts and decoded outputs, and it returns a list of scalar rewards. The quality and calibration of the reward function directly determine the quality of the RL-trained model.
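Concretely, the interface can be sketched as below. The parameter names follow trlx's examples, and **kwargs absorbs any extra metadata the trainer passes.

```python
from typing import List

# A sketch of the reward-function interface described above.
def reward_fn(
    samples: List[str],   # full texts: prompt + generation
    prompts: List[str],   # the original prompts
    outputs: List[str],   # the decoded generations alone
    **kwargs,
) -> List[float]:
    # Return one scalar reward per sample, in the same order.
    ...
```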

Usage

Design a reward function when setting up online PPO training with trlx.train(). The reward function is passed as the reward_fn argument and is called during each rollout to score generated samples. Choose this approach when you have a computable measure of text quality (sentiment, toxicity, factual accuracy, format compliance).
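A minimal end-to-end sketch of that wiring follows; the length-based reward, model name, and prompts are purely illustrative, not a recommended setup.

```python
import trlx

def reward_fn(samples, prompts, outputs, **kwargs):
    # Toy rule-based reward: favor longer continuations, capped at 1.0.
    return [min(len(output.split()) / 50.0, 1.0) for output in outputs]

trainer = trlx.train(
    "gpt2",                                  # base model to fine-tune with PPO
    reward_fn=reward_fn,                     # called on every rollout batch
    prompts=["The movie was", "I thought"],  # illustrative training prompts
)
```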

Theoretical Basis

The reward function maps generated text to scalar values:

$$r : \mathcal{Y} \to \mathbb{R}$$

In PPO, this reward is combined with a KL penalty that keeps the policy close to the reference model and discourages reward hacking:

$$R(x, y) = r(y) - \beta \, \mathrm{KL}\!\left(\pi_\theta(y \mid x) \,\|\, \pi_{\mathrm{ref}}(y \mid x)\right)$$
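As intuition, a per-sequence version of this computation might look like the sketch below; the names and default β are hypothetical, and trlx itself applies the penalty per token inside its PPO loop.

```python
import torch

# Hypothetical sketch of the KL-penalized reward above. logprobs_policy and
# logprobs_ref are 1-D tensors of per-token log-probabilities for the same
# generated sequence under the current policy and the frozen reference model.
def penalized_reward(r, logprobs_policy, logprobs_ref, beta=0.05):
    kl = (logprobs_policy - logprobs_ref).sum()  # sequence-level KL estimate
    return r - beta * kl  # larger beta keeps the policy closer to the reference
```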

Design considerations:

  • Scale: Rewards should be bounded and normalized to prevent training instability (see the normalization sketch after this list)
  • Density: Per-sample scalar rewards (not sparse episode-level rewards)
  • Calibration: Higher values should correspond to genuinely better text
  • Speed: The reward function is called at every rollout step, so inference speed matters
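One common way to enforce the scale constraint is to whiten each batch of rewards with running statistics and clip the result. The class below is an illustrative sketch; the clip threshold and update rule are assumptions, not trlx defaults.

```python
import numpy as np

class RewardNormalizer:
    """Whiten rewards with running statistics, then clip (illustrative)."""

    def __init__(self, clip=5.0):
        self.mean, self.var, self.count = 0.0, 1.0, 1e-8
        self.clip = clip

    def __call__(self, rewards):
        r = np.asarray(rewards, dtype=np.float64)
        # Fold this batch into the running mean/variance
        # (Chan et al.'s parallel variance update).
        n, b_mean, b_var = len(r), r.mean(), r.var()
        delta, total = b_mean - self.mean, self.count + n
        self.mean += delta * n / total
        self.var = (self.var * self.count + b_var * n
                    + delta**2 * self.count * n / total) / total
        self.count = total
        # Whiten and clip so PPO sees a bounded, roughly unit-scale signal.
        z = (r - self.mean) / np.sqrt(self.var + 1e-8)
        return np.clip(z, -self.clip, self.clip).tolist()
```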

Common reward function patterns:

  • Rule-based: Sentiment classifiers, regex matching, length constraints (a classifier sketch follows this list)
  • Model-based: Trained reward models that predict human preferences
  • Delta rewards: Compute the reward as the difference between the score of the generated output and that of a reference output
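As a concrete instance of the classifier pattern, the sketch below mirrors the kind of setup used in trlx's sentiment examples; the checkpoint name and label handling are assumptions here.

```python
from transformers import pipeline

# Sentiment classifier as a reward signal (illustrative checkpoint).
sentiment_fn = pipeline(
    "sentiment-analysis",
    model="lvwerra/distilbert-imdb",
    top_k=2,          # return scores for both labels
    truncation=True,
)

def sentiment_reward_fn(samples, prompts, outputs, **kwargs):
    results = sentiment_fn(samples)
    # Reward = probability the classifier assigns to the POSITIVE label.
    return [
        next(d["score"] for d in result if d["label"] == "POSITIVE")
        for result in results
    ]
```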

Related Pages

Implemented By

Uses Heuristic
