
Principle:Hpcaitech ColossalAI GRPO Policy Loss



Knowledge Sources
Domains: Reinforcement_Learning, Optimization
Last Updated: 2026-02-09 00:00 GMT

Overview

A policy gradient loss function that uses PPO-style clipping with group-relative advantages and a KL divergence penalty for stable reinforcement learning.

Description

The GRPO Policy Loss combines PPO's clipped surrogate objective with a per-token KL divergence penalty against a reference model. The advantages are computed group-relatively (normalized within each prompt's group of generations), eliminating the need for a learned value function. The loss supports both sample-level and token-level aggregation.
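As an illustration of the group-relative advantage computation described above, here is a minimal PyTorch sketch. The function name and tensor shapes are assumptions for illustration, not the ColossalAI API: rewards for each prompt's group of generations are standardized within the group.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalize rewards within each prompt's group of generations.

    rewards: (num_prompts, group_size) scalar reward per sampled completion.
    Returns advantages of the same shape. Because the baseline is the group
    mean, no learned value function is needed.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)
```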

Usage

Use this loss function inside the GRPO consumer's training step to update the policy model based on collected experiences.
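A hedged sketch of how such a loss might be wired into a consumer's training step follows. The helper `gather_log_probs`, the batch field names, and `loss_fn` (a callable like the `grpo_policy_loss` sketched under Theoretical Basis below) are illustrative placeholders, not the actual ColossalAI API:

```python
import torch
import torch.nn.functional as F

def gather_log_probs(logits, actions):
    # Per-token log-probabilities of the sampled actions (assumed helper).
    logp = F.log_softmax(logits, dim=-1)
    return logp.gather(-1, actions.unsqueeze(-1)).squeeze(-1)

def training_step(policy_model, ref_model, loss_fn, batch, optimizer):
    # Recompute current-policy log-probs for the collected experiences.
    log_probs = gather_log_probs(policy_model(batch["input_ids"]).logits,
                                 batch["actions"])
    # Reference model stays frozen; no gradients needed.
    with torch.no_grad():
        ref_log_probs = gather_log_probs(ref_model(batch["input_ids"]).logits,
                                         batch["actions"])
    loss = loss_fn(log_probs, batch["old_log_probs"], ref_log_probs,
                   batch["advantages"], batch["action_mask"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```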

Theoretical Basis

The clipped policy loss with KL penalty:

\[
\mathcal{L}(\theta) = -\frac{1}{N} \sum_{i} \min\!\Big( r_i(\theta)\, A_i,\ \operatorname{clip}\big(r_i(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\, A_i \Big) + \beta\, \mathrm{KL}_{\mathrm{token}}
\]

where \( r_i(\theta) = \dfrac{\pi_\theta(a_i \mid s_i)}{\pi_{\mathrm{old}}(a_i \mid s_i)} \) is the importance sampling ratio.
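To make the mapping from formula to code concrete, here is a minimal PyTorch sketch, an illustration rather than the ColossalAI implementation. The non-negative KL estimator \( e^{q-p} - (q-p) - 1 \) (commonly used in GRPO-style training) and the sample-level aggregation shown are assumptions:

```python
import torch

def grpo_policy_loss(log_probs, old_log_probs, ref_log_probs,
                     advantages, action_mask, clip_eps=0.2, beta=0.01):
    # Shapes: log_probs, old_log_probs, ref_log_probs, action_mask are (B, T);
    # advantages is (B, 1), the group-relative advantage broadcast over tokens;
    # action_mask is a float mask selecting generated (non-prompt) tokens.

    # Importance sampling ratio r_i(theta) = pi_theta / pi_old, per token.
    ratio = torch.exp(log_probs - old_log_probs)
    surr1 = ratio * advantages
    surr2 = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages

    # Non-negative per-token KL estimate against the frozen reference model.
    log_diff = ref_log_probs - log_probs
    kl = torch.exp(log_diff) - log_diff - 1.0

    # Negate the clipped surrogate (we minimize) and add the KL penalty.
    per_token = -torch.min(surr1, surr2) + beta * kl

    # Sample-level aggregation: mean over valid tokens per sequence, then over
    # the batch. Token-level aggregation would instead average all valid
    # tokens in the batch at once.
    seq_loss = (per_token * action_mask).sum(-1) / action_mask.sum(-1).clamp(min=1.0)
    return seq_loss.mean()
```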

Related Pages

Implemented By
