
Principle:Eric Mitchell Direct Preference Optimization Concatenated Inputs

From Leeroopedia


Knowledge Sources
Domains Data_Manipulation, Efficiency_Optimization, Deep_Learning
Last Updated 2026-02-08 02:00 GMT

Overview

A data preparation technique that pads and concatenates chosen and rejected sequence tensors along the batch dimension for efficient single-pass model inference.

Description

Input concatenation prepares chosen and rejected sequences for a single forward pass by:

  1. Finding the maximum sequence length across both chosen and rejected inputs
  2. Padding both to the same length (with appropriate padding values: 0 for input_ids/attention_mask, -100 for labels)
  3. Concatenating along the batch dimension (chosen first, then rejected)

This creates a single batch of size 2N from N chosen and N rejected sequences, enabling the concatenated forward pass optimization.
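The three steps above can be sketched in PyTorch. This is a minimal illustration, not any library's actual implementation; the function name `concatenate_inputs` and the padding values follow the description above.

```python
import torch
import torch.nn.functional as F

def concatenate_inputs(chosen_ids, rejected_ids, pad_value=0):
    """Pad two (N, L) batches of token ids to a common length, then
    concatenate them into one (2N, max_len) batch: chosen rows first,
    rejected rows after. Use pad_value=-100 for label tensors."""
    max_len = max(chosen_ids.shape[1], rejected_ids.shape[1])
    # F.pad with a 2-tuple pads the last dimension on the right
    chosen_padded = F.pad(
        chosen_ids, (0, max_len - chosen_ids.shape[1]), value=pad_value
    )
    rejected_padded = F.pad(
        rejected_ids, (0, max_len - rejected_ids.shape[1]), value=pad_value
    )
    return torch.cat([chosen_padded, rejected_padded], dim=0)

# Example: N=2 chosen sequences of length 3, N=2 rejected of length 5
chosen = torch.ones(2, 3, dtype=torch.long)
rejected = torch.ones(2, 5, dtype=torch.long)
batch = concatenate_inputs(chosen, rejected)
print(batch.shape)  # torch.Size([4, 5])
```

Note that the shorter batch is right-padded to the longer one's length, so the chosen rows end in `pad_value` while the rejected rows are unchanged.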

Usage

Use this principle whenever you need to process both chosen and rejected sequences through the same model efficiently. This is a prerequisite step for the concatenated forward pass.

Theoretical Basis

Padding to equal length is necessary because PyTorch tensors require uniform dimensions. The concatenation preserves the order (chosen first, rejected second) so that the results can be split deterministically after the forward pass.

Pseudo-code:

# Abstract input concatenation (NOT actual implementation)
max_len = max(chosen.shape[1], rejected.shape[1])
chosen_padded = pad(chosen, max_len)
rejected_padded = pad(rejected, max_len)
concatenated = concatenate([chosen_padded, rejected_padded], dim=0)

Related Pages

Implemented By
