Principle: Hpcaitech ColossalAI GRPO Reward Configuration
| Knowledge Sources | |
|---|---|
| Domains | Reinforcement_Learning, NLP |
| Last Updated | 2026-02-09 00:00 GMT |
Overview
A reward engineering pattern for configuring verifiable reward functions and prompt datasets that drive reinforcement learning from verifiable rewards (RLVR).
Description
GRPO Reward Configuration sets up the reward signal that guides policy optimization. Unlike traditional RLHF, which uses a learned reward model, RLVR uses verifiable reward functions that deterministically evaluate response correctness (e.g., checking math answers, validating code execution). Multiple reward functions can be composed to score different aspects of quality, as sketched below.
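As a concrete illustration, composition can be as simple as summing the scores of independent verifiable checks. This is a minimal sketch, not the ColossalAI API; it assumes each reward function maps a (response, ground-truth) pair to a float:

```python
from typing import Callable, List

# Assumed interface: a verifiable reward function maps
# (response, ground_truth) -> float. Illustrative, not ColossalAI's API.
RewardFn = Callable[[str, str], float]

def composed_reward(response: str, ground_truth: str,
                    reward_fns: List[RewardFn]) -> float:
    """Sum the scores of several verifiable reward functions.

    Each component scores one aspect of quality (correctness,
    format, etc.); the composite value drives policy optimization.
    """
    return sum(fn(response, ground_truth) for fn in reward_fns)
```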
The dataset consists of prompts with ground-truth answers, enabling the reward functions to verify response correctness by comparing generated text against expected answers.
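For example, such a dataset might pair each prompt with its verifiable answer (the field names here are illustrative assumptions, not a prescribed schema):

```python
# Illustrative prompt records: each carries a ground-truth answer
# that reward functions can check the generated response against.
dataset = [
    {"prompt": "What is 17 * 24?", "answer": "408"},
    {"prompt": "Solve for x: 3x + 5 = 20", "answer": "5"},
]
```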
Usage
Use this principle when setting up GRPO training for tasks where correctness is objectively verifiable, such as mathematical reasoning, code generation, or structured output tasks.
Theoretical Basis
The reward configuration defines a composite reward

$$R(y, y^{*}) = \sum_{i} r_i(y, y^{*})$$

where $r_i$ are the individual reward functions and $y^{*}$ is the ground-truth answer. Common reward functions include:
- Math reward: Extract numerical answer and compare to ground truth
- Code reward: Execute generated code against test cases
- Format reward: Check response follows required structure
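Below is a minimal sketch of the math-reward and format-reward patterns from the list above. The extraction regex, the `<think>` tag convention, and the score values are illustrative assumptions; a code reward would additionally execute the generated program in a sandbox against test cases:

```python
import re

def math_reward(response: str, ground_truth: str) -> float:
    """Extract the last number in the response and compare it to ground truth."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", response)
    if not numbers:
        return 0.0
    try:
        return 1.0 if abs(float(numbers[-1]) - float(ground_truth)) < 1e-6 else 0.0
    except ValueError:
        return 0.0

def format_reward(response: str, ground_truth: str) -> float:
    """Reward responses that follow an assumed <think>...</think> + answer layout."""
    pattern = r"^<think>.*?</think>\s*\S"
    return 0.5 if re.match(pattern, response, flags=re.DOTALL) else 0.0
```

Functions like these can then be passed as the `reward_fns` list in the composition sketch shown earlier, so correctness and structure are scored jointly.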