
Principle: Preference Data Preparation (LLMBook-zh, llmbook-zh.github.io)

From Leeroopedia


Knowledge Sources
Domains NLP, Alignment, Data_Engineering
Last Updated 2026-02-08 00:00 GMT

Overview

A data processing technique that splits conversational preference data into prompt, chosen response, and rejected response columns for DPO training.

Description

Preference Data Preparation transforms raw preference datasets (like Anthropic's HH-RLHF) into the format required by DPO training. Each example in the raw dataset contains a full conversation ending in a chosen or rejected response. The preparation step extracts the shared prompt portion and separates the chosen and rejected responses, splitting each conversation at its last assistant turn delimiter.

Usage

Use this principle when preparing data for DPO or RLHF training. The output format (prompt, chosen, rejected) is the standard expected by TRL's DPOTrainer.
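A single prepared example can be sketched as a plain dictionary with the three columns named above. The field names follow TRL's preference-dataset convention; the conversation text here is an illustrative placeholder, not taken from the real dataset.

```python
# One training example in the (prompt, chosen, rejected) format
# expected by TRL's DPOTrainer. The prompt ends at the assistant
# turn delimiter; the responses are the continuations.
example = {
    "prompt": "\n\nHuman: What does DPO stand for?\n\nAssistant:",
    "chosen": " DPO stands for Direct Preference Optimization.",
    "rejected": " I am not sure.",
}
```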

Theoretical Basis

The data transformation follows:

  1. Load the raw preference dataset (e.g., Anthropic/hh-rlhf).
  2. For each example, find the last assistant turn delimiter.
  3. Split: everything before the delimiter is the prompt; everything after is the response.
  4. Apply to both chosen and rejected columns.

Related Pages

Implemented By
