Implementation:Fastai Fastbook Tokenizer
| Knowledge Sources | |
|---|---|
| Domains | Natural Language Processing, Text Preprocessing |
| Last Updated | 2026-02-09 17:00 GMT |
Overview
A concrete tool from the fastai library for splitting raw text into token sequences augmented with special marker tokens.
Description
The fastai tokenization system consists of two layers:
- WordTokenizer: A wrapper around the spaCy tokenizer that handles the base word-level splitting. It loads a spaCy language model and delegates tokenization to spaCy's rule-based tokenizer.
- Tokenizer: The orchestrator class that wraps a base tokenizer (such as WordTokenizer) and applies pre-rules and post-rules to handle HTML cleanup, special token insertion, and lowercasing. It also supports parallel processing for tokenizing large corpora efficiently.
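To see what the base layer produces on its own, WordTokenizer can be called directly; the rules and special tokens only appear once the Tokenizer wrapper is involved. A minimal sketch (first is fastcore's helper for pulling the first element out of the generator that WordTokenizer returns):

from fastai.text.all import WordTokenizer
from fastcore.basics import first

# Base word-level splitting only: no rules, no special tokens yet
spacy_tok = WordTokenizer()
toks = first(spacy_tok(["The U.S. dollar $1 is $1.00."]))
print(toks)
# spaCy applies language-specific rules, e.g. keeping 'U.S.' as a single token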
The Tokenizer class applies the following default rules in order:
- fix_html - Replaces HTML entities and artifacts (e.g. &amp;, <br />) with their text equivalents
- replace_rep - Replaces three or more repetitions of a character with xxrep, the count, and the character
- replace_wrep - Replaces three or more repetitions of a word with xxwrep, the count, and the word
- spec_add_spaces - Adds spaces around special characters such as / and #
- rm_useless_spaces - Collapses runs of whitespace into a single space
- replace_all_caps - Replaces all-caps words with xxup followed by the lowercased word
- replace_maj - Replaces capitalized words with xxmaj followed by the lowercased word
- lowercase - Lowercases the remaining text and prepends the xxbos begin-of-stream marker (which is why every tokenized output below starts with xxbos)
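Each rule is a plain string-to-string function exported by fastai.text.all, so the behavior above can be checked in isolation; a quick sketch:

from fastai.text.all import replace_rep, replace_wrep, replace_all_caps

# Rules are ordinary functions on strings, easy to test one at a time
print(repr(replace_rep('cccc')))               # ' xxrep 4 c '
print(repr(replace_wrep('no no no no maybe'))) # inserts xxwrep + count + word
print(repr(replace_all_caps('I LIKE SHOUTING')))  # marks all-caps words with xxup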
Usage
Use these classes when you need fine-grained control over the tokenization process, or when you want to tokenize text outside of the DataBlock API. For most workflows, tokenization is handled automatically by TextBlock, but direct use is valuable for debugging and inspection.
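For comparison, here is a sketch of the automatic route, following the fastbook IMDB language-model example: TextBlock.from_folder wires this same tokenization (with caching) into a DataBlock, so no Tokenizer is constructed by hand.

from fastai.text.all import *

# Tokenization happens inside TextBlock; results are cached alongside the data
path = untar_data(URLs.IMDB)
dls_lm = DataBlock(
    blocks=TextBlock.from_folder(path, is_lm=True),
    get_items=partial(get_text_files, folders=['train', 'test', 'unsup']),
    splitter=RandomSplitter(0.1)
).dataloaders(path, path=path, bs=128, seq_len=80)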
Code Reference
Source Location
- Repository: fastbook
- File: translations/cn/10_nlp.md (lines 118-234)
- Library module: fastai.text.core
Signature
class WordTokenizer():
    "Tokenizes text using spaCy's tokenizer"
    def __init__(self, lang='en'):
        ...
    def __call__(self, items: list) -> list:
        ...

class Tokenizer():
    "Tokenizer that applies rules and a base tokenizer"
    def __init__(
        self,
        tok: callable = None,      # Base tokenizer (default: WordTokenizer)
        rules: list = None,        # List of pre/post processing rules
        counter: Counter = None,   # Token counter for vocabulary
        lengths: list = None,      # Sequence lengths
        mode: str = None,          # Tokenization mode
        sep: str = ' '             # Token separator
    ):
        ...
    def __call__(self, items: list) -> L:
        ...
Import
from fastai.text.all import WordTokenizer, Tokenizer
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| tok | callable | No | Base tokenizer instance. Defaults to WordTokenizer() which uses spaCy. |
| rules | list | No | List of text processing rules. Defaults to fastai's standard NLP rules. |
| items | list of str | Yes | List of raw text strings to tokenize (passed to __call__). |
Outputs
| Name | Type | Description |
|---|---|---|
| tokens | L (list of list of str) | An L list where each element is a list of token strings including special tokens (xxbos, xxmaj, xxup, xxrep, xxwrep, etc.). Word tokens are lowercased; original casing is preserved through the xxup and xxmaj markers. |
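The rules input is the main customization point. A sketch, assuming you want HTML cleanup and lowercasing but none of the special-token markers:

from fastai.text.all import Tokenizer, WordTokenizer, fix_html, lowercase

# Trimmed rule list: no xxrep/xxwrep/xxup/xxmaj markers are inserted
tok = Tokenizer(WordTokenizer(), rules=[fix_html, lowercase])
print(tok(["AMAZING &amp; fun!!!"])[0])
# Note: the lowercase rule still prepends the xxbos marker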
Usage Examples
Basic Usage
from fastai.text.all import WordTokenizer, Tokenizer
# Create a tokenizer with the default WordTokenizer
tok = Tokenizer(WordTokenizer())
# Tokenize a single sentence
result = tok(["This movie was AMAZING!!!"])
print(result[0])
# Output: ['xxbos', 'xxmaj', 'this', 'movie', 'was', 'xxup', 'amazing', 'xxrep', '3', '!']
Tokenizing Multiple Texts
from fastai.text.all import WordTokenizer, Tokenizer
tok = Tokenizer(WordTokenizer())
texts = [
    "This is a great film.",
    "I HATED every minute of it.",
    "wow wow wow, what a movie!!!"
]
tokens = tok(texts)
for t in tokens:
print(t)
# ['xxbos', 'xxmaj', 'this', 'is', 'a', 'great', 'film', '.']
# ['xxbos', 'xxmaj', 'i', 'xxup', 'hated', 'every', 'minute', 'of', 'it', '.']
# ['xxbos', 'xxwrep', '3', 'wow', ',', 'what', 'a', 'movie', 'xxrep', '3', '!']
Inspecting Special Tokens
from fastai.text.all import WordTokenizer, Tokenizer
tok = Tokenizer(WordTokenizer())
# Demonstrate all special token types
text = "HELLO World... the the the amazing!!!!!"
result = tok([text])
print(result[0])
# ['xxbos', 'xxup', 'hello', 'xxmaj', 'world', 'xxrep', '3', '.', 'xxwrep', '3', 'the', 'amazing', 'xxrep', '5', '!']
# Note that '...' is itself a character repetition, so it becomes xxrep 3 .
Using with File Paths
from fastai.text.all import WordTokenizer, Tokenizer
from fastai.data.external import untar_data, URLs
from fastai.data.transforms import get_text_files
path = untar_data(URLs.IMDB)
files = get_text_files(path, folders=['train'])
# Read and tokenize a batch of files
texts = [f.read_text() for f in files[:5]]
tok = Tokenizer(WordTokenizer())
tokenized = tok(texts)
# Inspect first 20 tokens of the first review
print(tokenized[0][:20])
Related Pages
Implements Principle
Requires Environment