Principle:Hpcaitech ColossalAI GRPO Policy Loss
| Knowledge Sources | |
|---|---|
| Domains | Reinforcement_Learning, Optimization |
| Last Updated | 2026-02-09 00:00 GMT |
Overview
A policy gradient loss function that uses PPO-style clipping with group-relative advantages and a KL divergence penalty for stable reinforcement learning.
Description
The GRPO Policy Loss combines PPO's clipped surrogate objective with a per-token KL divergence penalty against a reference model. The advantages are computed group-relatively (normalized within each prompt's group of generations), eliminating the need for a learned value function. The loss supports both sample-level and token-level aggregation.
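As a concrete illustration of the group-relative normalization, the sketch below z-scores rewards within each prompt's group of sampled completions. The function name, shapes, and epsilon are illustrative assumptions, not taken from the ColossalAI codebase.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Z-score each reward within its prompt's group of generations.

    rewards: (num_prompts, group_size), one scalar reward per completion.
    Returns advantages of the same shape.
    """
    mean = rewards.mean(dim=-1, keepdim=True)   # per-group mean
    std = rewards.std(dim=-1, keepdim=True)     # per-group std
    return (rewards - mean) / (std + eps)       # normalize within the group

# Example: 2 prompts, 4 sampled completions each.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.2, 0.9, 0.4, 0.5]])
print(group_relative_advantages(rewards))
```

Because the baseline is the group mean, each completion is scored only against its siblings sampled for the same prompt, which is what removes the need for a learned critic.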
Usage
Use this loss function inside the GRPO consumer's training step to update the policy model based on collected experiences.
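A minimal sketch of where such a loss sits in a consumer-style update step. The `update_policy` signature, the batch keys, and the assumption that the policy model returns per-token log-probs directly are all placeholders for illustration, not the actual ColossalAI interfaces.

```python
def update_policy(policy_model, grpo_loss, optimizer, batch):
    """One optimization step over a batch of collected rollouts.

    `batch` is assumed to carry the rollout tensors: token ids, an
    attention mask, an action mask over completion tokens, per-token
    log-probs recorded at rollout time, reference-model log-probs,
    and group-relative advantages.
    """
    # Re-score the rollout tokens under the current policy; the model is
    # assumed to return per-token log-probs of the taken actions.
    action_log_probs = policy_model(batch["input_ids"], batch["attention_mask"])

    loss = grpo_loss(
        action_log_probs,
        batch["old_log_probs"],   # behavior policy at rollout time
        batch["ref_log_probs"],   # frozen reference for the KL penalty
        batch["advantages"],      # group-relative, one value per sample
        batch["action_mask"],     # excludes prompt and padding tokens
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Keeping the rollout-time (`old`) log-probs with the batch lets the same collected experiences be reused for multiple policy updates under the clipped objective.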
Theoretical Basis
The clipped policy loss with KL penalty:

$$
\mathcal{L}_{\mathrm{GRPO}}(\theta) = -\frac{1}{G}\sum_{i=1}^{G}\frac{1}{|o_i|}\sum_{t=1}^{|o_i|}\left[\min\!\left(r_{i,t}(\theta)\,\hat{A}_{i,t},\ \operatorname{clip}\!\left(r_{i,t}(\theta),\,1-\varepsilon,\,1+\varepsilon\right)\hat{A}_{i,t}\right)-\beta\,\mathbb{D}_{\mathrm{KL}}\!\left[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right]\right]
$$

where

$$
r_{i,t}(\theta)=\frac{\pi_\theta\!\left(o_{i,t}\mid q,\,o_{i,<t}\right)}{\pi_{\theta_{\mathrm{old}}}\!\left(o_{i,t}\mid q,\,o_{i,<t}\right)}
$$

is the importance sampling ratio, $G$ is the group size (generations per prompt $q$), $\hat{A}_{i,t}$ is the group-relative advantage, $\varepsilon$ is the clip range, and $\beta$ weights the per-token KL penalty against the reference policy $\pi_{\mathrm{ref}}$.
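The sketch below computes this loss per token, assuming the k3-style KL estimator ($\pi_{\mathrm{ref}}/\pi_\theta - \log(\pi_{\mathrm{ref}}/\pi_\theta) - 1$) commonly paired with GRPO. The function name, default hyperparameters, and token-level aggregation are illustrative assumptions.

```python
import torch

def grpo_policy_loss(
    log_probs: torch.Tensor,      # (B, T) current policy log-probs of taken tokens
    old_log_probs: torch.Tensor,  # (B, T) log-probs recorded at rollout time
    ref_log_probs: torch.Tensor,  # (B, T) frozen reference-model log-probs
    advantages: torch.Tensor,     # (B,) group-relative advantage per sample
    action_mask: torch.Tensor,    # (B, T) 1 for completion tokens, 0 elsewhere
    clip_eps: float = 0.2,        # illustrative defaults
    kl_beta: float = 0.01,
) -> torch.Tensor:
    adv = advantages.unsqueeze(-1)                    # broadcast over tokens
    ratio = torch.exp(log_probs - old_log_probs)      # importance ratio r_t
    surr1 = ratio * adv
    surr2 = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
    policy_term = -torch.min(surr1, surr2)            # clipped surrogate

    # k3 KL estimator: pi_ref/pi_theta - log(pi_ref/pi_theta) - 1 (always >= 0).
    log_diff = ref_log_probs - log_probs
    kl = torch.exp(log_diff) - log_diff - 1.0

    per_token = policy_term + kl_beta * kl
    # Token-level aggregation: average over all unmasked tokens in the batch.
    return (per_token * action_mask).sum() / action_mask.sum().clamp(min=1)
```

For the sample-level aggregation mentioned in the Description, replace the final line with a per-sequence mean followed by a batch mean, e.g. `((per_token * action_mask).sum(-1) / action_mask.sum(-1)).mean()`.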