
Principle: Reward Modeling

From Leeroopedia


Knowledge Sources: LLMBook-zh (llmbook-zh.github.io)
Domains: NLP, Alignment, Reinforcement_Learning
Last Updated: 2026-02-08 00:00 GMT

Overview

A technique that trains a model on human preference data to assign scalar reward scores to language model outputs, providing the reward signal for reinforcement-learning-based alignment.

Description

Reward Modeling trains a model to predict human preferences between pairs of model outputs. Given a prompt and two responses (one preferred, one rejected), the reward model learns to assign higher scalar rewards to preferred responses. The trained reward model then provides the reward signal for reinforcement learning algorithms like PPO.

The approach uses contrastive learning: the model computes rewards for both responses and optimizes a binary cross-entropy loss on the reward difference. An optional language modeling loss serves as a regularization term to prevent the reward model from forgetting language understanding.
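To make the architecture concrete, below is a minimal PyTorch sketch of one common way to structure such a reward model: a language-model backbone with a scalar head that reads the reward off the final token. The class name, the `backbone` call signature, and the tensor shapes are illustrative assumptions, not a specific library API.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Illustrative reward model: an LM backbone plus a scalar head.

    Assumes `backbone(input_ids, attention_mask)` returns hidden states
    of shape (batch, seq_len, hidden_size); the reward is read off the
    last non-padding token of each sequence.
    """

    def __init__(self, backbone: nn.Module, hidden_size: int):
        super().__init__()
        self.backbone = backbone
        self.reward_head = nn.Linear(hidden_size, 1)  # maps hidden state -> scalar reward

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        hidden = self.backbone(input_ids, attention_mask)  # (B, T, H); assumed signature
        last_idx = attention_mask.sum(dim=1) - 1           # (B,) index of last real token
        last_hidden = hidden[torch.arange(hidden.size(0)), last_idx]  # (B, H)
        return self.reward_head(last_hidden).squeeze(-1)   # (B,) scalar rewards
```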

Usage

Use this principle when implementing the RLHF (Reinforcement Learning from Human Feedback) pipeline. The reward model is trained before the RL fine-tuning stage and is used to score model outputs during PPO/GRPO training.
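As a hedged sketch of where the reward model sits in that pipeline, the snippet below scores policy rollouts with a frozen reward model so the scalar rewards can feed the PPO/GRPO update. It reuses the `RewardModel` interface sketched above; the tokenizer call assumes a Hugging-Face-style tokenizer, and all names are hypothetical.

```python
import torch

@torch.no_grad()  # the reward model is frozen during RL fine-tuning
def score_rollouts(reward_model, tokenizer, prompts, responses, device="cuda"):
    """Return one scalar reward per (prompt, response) pair."""
    texts = [p + r for p, r in zip(prompts, responses)]
    batch = tokenizer(texts, return_tensors="pt",
                      padding=True, truncation=True).to(device)
    rewards = reward_model(batch["input_ids"], batch["attention_mask"])  # (B,)
    return rewards  # consumed by the PPO/GRPO advantage computation
```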

Theoretical Basis

Given a prompt $x$, a preferred response $y_w$, and a rejected response $y_l$:

  1. Compute scalar rewards: $r_w = f_\theta(x, y_w)$ and $r_l = f_\theta(x, y_l)$
  2. Compute the contrastive loss:

$$\mathcal{L}_{\mathrm{RM}} = -\log \sigma(r_w - r_l)$$

This is equivalent to binary cross-entropy on the reward difference.

Regularization: An optional language modeling loss prevents catastrophic forgetting:

$$\mathcal{L} = \mathcal{L}_{\mathrm{RM}} + \mathcal{L}_{\mathrm{LM}}$$
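The combined objective translates directly into code. The sketch below assumes unit weight on the optional language-modeling term, since the formula above does not specify a coefficient; the function name and signature are illustrative.

```python
from typing import Optional

import torch
import torch.nn.functional as F

def reward_modeling_loss(r_w: torch.Tensor, r_l: torch.Tensor,
                         lm_loss: Optional[torch.Tensor] = None) -> torch.Tensor:
    """Pairwise loss L_RM = -log sigmoid(r_w - r_l), optionally
    regularized by a language-modeling loss (unit weight assumed)."""
    loss = -F.logsigmoid(r_w - r_l).mean()  # contrastive / BCE-on-difference term
    if lm_loss is not None:
        loss = loss + lm_loss               # L = L_RM + L_LM
    return loss
```

Note that $-\log \sigma(r_w - r_l)$ is exactly binary cross-entropy with target 1 applied to $\sigma(r_w - r_l)$, which is why the two formulations in the text coincide.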

Related Pages

Implemented By

Uses Heuristic
