
Implementation:Vllm project Vllm CPU Pos Encoding

From Leeroopedia


Knowledge Sources
Domains: Position_Encoding, CPU_Inference
Last Updated: 2026-02-08 00:00 GMT

Overview

Implements Rotary Position Embedding (RoPE) for CPU inference with vectorized complex multiplication, supporting both NeoX-style and GPT-J-style interleaving patterns.

Description

This file provides two rotary embedding implementations: rotary_embedding_impl uses the NeoX-style layout, in which the first and second halves of each head's rotary dimensions hold the real and imaginary components of the rotated pairs, while rotary_embedding_gptj_impl uses the GPT-J-style interleaved layout, in which adjacent element pairs form the complex numbers. Both implementations apply cos/sin-based rotation to the query and key tensors using FP32Vec8 vectorization, with OpenMP parallelization across tokens.
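
The two layouts differ only in which elements are paired for each complex rotation. The scalar sketch below is illustrative (the helper names rotate_neox and rotate_gptj are hypothetical, and the real kernels vectorize these loops with FP32Vec8): head points at one head's rot_dim rotary values, while cos and sin each point at the rot_dim / 2 cached values for the token's position.

#include <cstdint>

// NeoX-style pairing: element i rotates with element i + embed_dim,
// i.e. the first half holds real parts, the second half imaginary parts.
void rotate_neox(float* head, const float* cos, const float* sin,
                 int64_t embed_dim /* = rot_dim / 2 */) {
  for (int64_t i = 0; i < embed_dim; ++i) {
    const float x = head[i];              // real component
    const float y = head[i + embed_dim];  // imaginary component
    head[i] = x * cos[i] - y * sin[i];
    head[i + embed_dim] = y * cos[i] + x * sin[i];
  }
}

// GPT-J-style pairing: adjacent elements (2i, 2i + 1) form each pair.
void rotate_gptj(float* head, const float* cos, const float* sin,
                 int64_t embed_dim /* = rot_dim / 2 */) {
  for (int64_t i = 0; i < embed_dim; ++i) {
    const float x = head[2 * i];      // real component
    const float y = head[2 * i + 1];  // imaginary component
    head[2 * i] = x * cos[i] - y * sin[i];
    head[2 * i + 1] = y * cos[i] + x * sin[i];
  }
}

In both cases the pair (x, y) is rotated by the same angle; only the memory addressing differs, which is why a single is_neox flag can switch layouts without changing the math.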

Usage

These functions are compiled into the vLLM CPU extension and invoked from the Python layer via the rotary_embedding entry point. The is_neox boolean flag selects between the two interleaving styles at runtime to support different model architectures (LLaMA uses NeoX-style, GPT-J uses interleaved).
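
A minimal sketch of how such an entry point can be exposed through the PyTorch custom-op mechanism is shown below. The library name _C and the exact schema string are assumptions for illustration, not taken from this file; TORCH_LIBRARY_FRAGMENT and TORCH_LIBRARY_IMPL are the standard libtorch registration macros.

#include <optional>
#include <torch/all.h>
#include <torch/library.h>

// Kernel implemented in the source file; declaration repeated here.
void rotary_embedding(torch::Tensor& positions, torch::Tensor& query,
                      std::optional<torch::Tensor> key, int64_t head_size,
                      torch::Tensor& cos_sin_cache, bool is_neox);

// Hypothetical registration sketch: the _C library name and schema text
// are assumptions, not confirmed from this file.
TORCH_LIBRARY_FRAGMENT(_C, ops) {
  // Tensor(a!) / Tensor(b!)? mark query and the optional key as mutated
  // in place, matching the in-place contract described above.
  ops.def(
      "rotary_embedding(Tensor positions, Tensor(a!) query, "
      "Tensor(b!)? key, int head_size, Tensor cos_sin_cache, "
      "bool is_neox) -> ()");
}

TORCH_LIBRARY_IMPL(_C, CPU, ops) {
  ops.impl("rotary_embedding", &rotary_embedding);
}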

Code Reference

Source Location

csrc/cpu/pos_encoding.cpp

Signature

void rotary_embedding(torch::Tensor& positions, torch::Tensor& query,
                      std::optional<torch::Tensor> key, int64_t head_size,
                      torch::Tensor& cos_sin_cache, bool is_neox);

Import

#include "cpu_types.hpp"

I/O Contract

Inputs

Name Type Required Description
positions torch::Tensor Yes Token position indices [num_tokens] used to index into cos_sin_cache
query torch::Tensor Yes Query tensor [num_tokens, num_heads * head_size] modified in-place
key std::optional<torch::Tensor> No Key tensor [num_tokens, num_kv_heads * head_size] modified in-place; pass std::nullopt to skip key rotation
head_size int64_t Yes Dimension of each attention head
cos_sin_cache torch::Tensor Yes Pre-computed cosine and sine values [max_position, rot_dim]
is_neox bool Yes If true, use NeoX-style (split-half) layout; if false, use GPT-J-style (interleaved) layout

Outputs

Name Type Description
query torch::Tensor Query tensor with rotary embeddings applied in-place
key torch::Tensor Key tensor with rotary embeddings applied in-place (if provided)
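
The contract above implies a set of shape relationships between the arguments. The helper below is a hedged sketch of the checks a caller could apply before invoking the kernel; TORCH_CHECK is the standard libtorch assertion macro, but the function itself and the derived quantities are assumptions drawn from the tables, not code from this file.

#include <optional>
#include <torch/all.h>

// Hypothetical pre-call validation derived from the I/O contract above.
void check_rotary_embedding_args(const torch::Tensor& positions,
                                 const torch::Tensor& query,
                                 const std::optional<torch::Tensor>& key,
                                 int64_t head_size,
                                 const torch::Tensor& cos_sin_cache) {
  const int64_t num_tokens = positions.numel();
  const int64_t rot_dim = cos_sin_cache.size(1);

  TORCH_CHECK(query.size(0) == num_tokens,
              "query must have one row per position");
  TORCH_CHECK(query.size(-1) % head_size == 0,
              "query's last dim must be num_heads * head_size");
  TORCH_CHECK(rot_dim <= head_size,
              "rotary dims cannot exceed head_size");
  if (key.has_value()) {
    TORCH_CHECK(key->size(0) == num_tokens,
                "key must have one row per position");
    TORCH_CHECK(key->size(-1) % head_size == 0,
                "key's last dim must be num_kv_heads * head_size");
  }
}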

Usage Examples

// Apply NeoX-style rotary embedding to query and key
const int64_t num_tokens = 16;   // one row per token position
const int64_t num_heads = 8;
const int64_t num_kv_heads = 8;
const int64_t head_size = 64;
const int64_t rot_dim = 64;      // rotary dims per head (<= head_size)
const int64_t max_position = 2048;

torch::Tensor positions = torch::arange(0, num_tokens, torch::kLong);
torch::Tensor query = torch::randn({num_tokens, num_heads * head_size});
torch::Tensor key = torch::randn({num_tokens, num_kv_heads * head_size});
// Placeholder values; a real cache holds precomputed cos/sin pairs.
torch::Tensor cos_sin_cache = torch::randn({max_position, rot_dim});

rotary_embedding(positions, query, key, head_size,
                 cos_sin_cache, /*is_neox=*/true);

// Apply GPT-J-style rotary embedding (query only)
rotary_embedding(positions, query, std::nullopt, head_size,
                 cos_sin_cache, /*is_neox=*/false);
