
Principle: OpenRLHF Weight Synchronization

From Leeroopedia


Knowledge Sources
Domains: Distributed_Computing, Training_Infrastructure
Last Updated: 2026-02-07 00:00 GMT

Overview

A communication pattern that broadcasts updated policy weights from training (DeepSpeed) workers to inference (vLLM) workers for on-policy generation.

Description

Weight Synchronization addresses the central challenge in distributed PPO: the training workers update the policy with DeepSpeed, while the generation workers serve it with vLLM for fast inference. After each PPO update, the new policy weights must be transferred from the DeepSpeed-sharded training model to the vLLM engines so that subsequent generations use the updated policy.

This involves gathering sharded parameters from DeepSpeed (ZeRO-3), serializing them, and loading them into vLLM's model via its weight update API.
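A minimal sketch of this gather-and-broadcast flow is below. Only `deepspeed.zero.GatheredParameters` and `torch.distributed.broadcast` are concrete APIs; the `vllm_engines` handle and its `update_weight` method are hypothetical stand-ins for whatever RPC interface the serving side exposes, and the process-group setup is assumed, not OpenRLHF's actual code.

```python
import torch
import torch.distributed as dist
import deepspeed

def broadcast_weights_to_vllm(model, vllm_engines, group):
    """Gather ZeRO-3 shards and push full tensors to the vLLM workers.

    Assumptions (illustrative, not OpenRLHF's real entry points):
    - `vllm_engines` is a list of RPC handles to vLLM engines, each with a
      hypothetical `update_weight(name, dtype, shape)` method that makes the
      engine join the matching broadcast and load the received tensor.
    - `group` is a torch.distributed process group spanning training rank 0
      and all vLLM workers.
    """
    for name, param in model.named_parameters():
        # Under ZeRO-3 each rank holds only a shard; this context manager
        # temporarily materializes the full parameter on participating ranks.
        with deepspeed.zero.GatheredParameters([param], enabled=True):
            if dist.get_rank() == 0:
                full = param.data.clone()
                # Tell each engine what tensor to expect, then broadcast the
                # actual data once over the shared group.
                for engine in vllm_engines:
                    engine.update_weight(name, dtype=full.dtype, shape=full.shape)
                dist.broadcast(full, src=0, group=group)
```

Iterating parameter by parameter keeps peak memory at one full tensor rather than a whole unsharded model, which is why the gather sits inside the loop.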

Usage

Called after each PPO training step, before the next generation round. Critical for maintaining on-policy training.
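In a training loop, that ordering looks roughly like the sketch below; `generate_experience`, `ppo_step`, and `sync_interval` are illustrative placeholders rather than OpenRLHF's actual API, and `broadcast_weights_to_vllm` is the sketch from the Description above.

```python
# Illustrative PPO loop ordering; all names here are placeholders.
for step in range(num_steps):
    batch = generate_experience(vllm_engines)  # rollouts from current vLLM weights
    ppo_step(model, batch)                     # DeepSpeed update of the policy
    # Sync every `sync_interval` steps: smaller values keep generation
    # closer to on-policy at the cost of more communication.
    if (step + 1) % sync_interval == 0:
        broadcast_weights_to_vllm(model, vllm_engines, group)
```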

Theoretical Basis

On-policy training requires $\pi_{\text{generate}} = \pi_{\text{train}}$.

Without weight synchronization, generated samples become off-policy, degrading PPO performance. The sync frequency (every N steps) trades off generation freshness against communication overhead.
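To see why, recall PPO's clipped surrogate objective (standard PPO notation, not anything OpenRLHF-specific), whose importance ratio is defined against the policy that generated the samples:

$$
L^{\text{CLIP}}(\theta) = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\; \operatorname{clip}\!\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}
$$

If vLLM serves stale weights, samples are drawn from some $\pi_{\text{generate}} \neq \pi_{\theta_{\text{old}}}$, so $r_t$ no longer measures the true policy shift and the gradient estimate is biased.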

Related Pages

Implemented By
