Principle: LaurentMazare tch-rs BPE Tokenization
| Knowledge Sources | |
|---|---|
| Domains | NLP, Text_Processing |
| Last Updated | 2026-02-08 14:00 GMT |
Overview
A subword tokenization algorithm that iteratively merges the most frequent byte pairs, encoding text as a sequence of variable-length tokens drawn from a fixed vocabulary.
Description
Byte Pair Encoding (BPE) tokenization converts raw text into a sequence of integer token IDs from a fixed vocabulary. Starting with individual bytes, the algorithm iteratively merges the most frequent adjacent pairs according to learned merge rules. This balances vocabulary size against token granularity: common words become single tokens, while rare words are broken into subword pieces. SentencePiece adds a space-prefix convention: the space before each word is encoded as the meta symbol ▁ (U+2581) prefixed to that word.
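A minimal sketch of that space-prefix step in plain Rust (the function name is illustrative; real SentencePiece also performs additional normalization):

```rust
// Minimal sketch of SentencePiece-style pre-tokenization: each word gets the
// meta symbol ▁ (U+2581) in place of the space that preceded it.
fn sp_pretokenize(text: &str) -> Vec<String> {
    text.split_whitespace()
        .map(|w| format!("\u{2581}{w}"))
        .collect()
}

fn main() {
    // "Hello world" -> ["▁Hello", "▁world"]
    println!("{:?}", sp_pretokenize("Hello world"));
}
```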
Usage
Use as the text preprocessing step before feeding prompts to language models. The tokenizer must match the one used during model training (e.g., LLaMA uses a SentencePiece BPE tokenizer with a 32,000-token vocabulary).
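For instance, when a model ships a HuggingFace-format tokenizer.json, it can be loaded from Rust with the `tokenizers` crate. This sketch assumes that crate and an illustrative file path; it is not part of tch-rs itself:

```rust
// Sketch assuming the HuggingFace `tokenizers` crate and a model-provided
// tokenizer.json; the path is illustrative.
use tokenizers::Tokenizer;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let tokenizer = Tokenizer::from_file("llama/tokenizer.json")?;
    // The IDs fed to the model must come from the same vocabulary the model
    // was trained with.
    let encoding = tokenizer.encode("Hello world", false)?;
    println!("{:?}", encoding.get_ids());
    Ok(())
}
```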
Theoretical Basis
BPE Algorithm:
1. Start with byte-level vocabulary: [a, b, c, ..., ▁]
2. Count all adjacent byte pairs in training corpus
3. Merge most frequent pair into new token
4. Repeat until vocabulary reaches target size
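A minimal Rust sketch of these four training steps, assuming a toy corpus of words already split into symbols (all names are illustrative, not a real tch-rs API):

```rust
use std::collections::HashMap;

// Training sketch: each "word" is a list of symbols; we repeatedly merge the
// most frequent adjacent pair until the vocabulary (base symbols + learned
// merges) reaches the target size.
fn train_bpe(
    mut corpus: Vec<Vec<String>>,
    base_vocab: usize,
    target_vocab: usize,
) -> Vec<(String, String)> {
    let mut merges: Vec<(String, String)> = Vec::new();
    while base_vocab + merges.len() < target_vocab {
        // Step 2: count all adjacent symbol pairs across the corpus.
        let mut counts: HashMap<(String, String), usize> = HashMap::new();
        for word in &corpus {
            for pair in word.windows(2) {
                *counts.entry((pair[0].clone(), pair[1].clone())).or_default() += 1;
            }
        }
        // Step 3: merge the most frequent pair (stop if nothing is left).
        let Some((best, _)) = counts.into_iter().max_by_key(|&(_, c)| c) else {
            break;
        };
        for word in &mut corpus {
            let mut i = 0;
            while i + 1 < word.len() {
                if word[i] == best.0 && word[i + 1] == best.1 {
                    let merged = format!("{}{}", word[i], word[i + 1]);
                    word[i] = merged;
                    word.remove(i + 1);
                } else {
                    i += 1;
                }
            }
        }
        merges.push(best); // Step 4: repeat until the target size is reached.
    }
    merges
}

fn main() {
    // Toy corpus with a 5-symbol base vocabulary {l, o, w, e, r}.
    let corpus: Vec<Vec<String>> = vec![
        ["l", "o", "w"].iter().map(|s| s.to_string()).collect(),
        ["l", "o", "w", "e", "r"].iter().map(|s| s.to_string()).collect(),
    ];
    // Learns e.g. [("l", "o"), ("lo", "w"), ("low", "e")], depending on ties.
    println!("{:?}", train_bpe(corpus, 5, 8));
}
```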
Encoding:
1. Pre-tokenize, then apply merges in priority order until no further merges are possible
2. Map the resulting tokens to integer IDs via a vocabulary lookup
Example: "Hello world" → "▁Hello" "▁world" → [token_ids...]
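And a matching sketch of the encoding side, where `ranks` holds the learned merge priorities and `vocab` the token-to-ID table (again illustrative names, not a real tch-rs API):

```rust
use std::collections::HashMap;

// Encoding sketch: `ranks` maps each merge pair to its priority (lower rank =
// earlier merge rule) and `vocab` maps token strings to integer IDs.
fn encode(
    word: &str,
    ranks: &HashMap<(String, String), usize>,
    vocab: &HashMap<String, i64>,
) -> Vec<i64> {
    // Start from individual characters (individual bytes in byte-level BPE).
    let mut parts: Vec<String> = word.chars().map(|c| c.to_string()).collect();
    // Repeatedly apply the highest-priority (lowest-rank) applicable merge
    // until no adjacent pair has a merge rule.
    while let Some((_, i)) = parts
        .windows(2)
        .enumerate()
        .filter_map(|(i, p)| ranks.get(&(p[0].clone(), p[1].clone())).map(|&r| (r, i)))
        .min()
    {
        let merged = format!("{}{}", parts[i], parts[i + 1]);
        parts[i] = merged;
        parts.remove(i + 1);
    }
    // Vocabulary lookup; unknown tokens are dropped in this sketch (a real
    // tokenizer would fall back to byte tokens or an <unk> ID).
    parts.iter().filter_map(|t| vocab.get(t).copied()).collect()
}

fn main() {
    let ranks = HashMap::from([
        (("l".into(), "o".into()), 0),
        (("lo".into(), "w".into()), 1),
    ]);
    let vocab = HashMap::from([("low".to_string(), 42)]);
    println!("{:?}", encode("low", &ranks, &vocab)); // [42]
}
```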