# Principle: Lance Format Fragment Rewriting
| Knowledge Sources | |
|---|---|
| Domains | Data_Engineering, Storage_Optimization |
| Last Updated | 2026-02-08 19:00 GMT |
## Overview
Fragment rewriting is the execution phase of compaction where one or more input fragments are read, their data is re-encoded into new optimally-sized fragments, and a mapping of old-to-new row addresses is produced for downstream index remapping.
## Description
Once a compaction plan has been produced, each task must be executed to physically rewrite the data. Fragment rewriting takes a group of source fragments, scans all live rows (skipping deleted rows), and writes them into new fragment files that meet the target size constraints. This process eliminates deletion file overhead, merges undersized fragments, and ensures the new fragments conform to the current dataset schema (including any schema evolution such as dropped columns).
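As a concrete entry point, the sketch below drives a rewrite through the pylance Python bindings: write a dataset as many small fragments, delete some rows, then compact. The parameter names (`max_rows_per_file`, `target_rows_per_fragment`, `materialize_deletions`) match recent pylance releases but should be checked against your installed version.

```python
# Sketch assuming recent pylance; parameter names may vary by version.
import pyarrow as pa
import lance

tbl = pa.table({"id": list(range(10_000))})

# Write with deliberately small fragments so compaction has work to do.
ds = lance.write_dataset(tbl, "/tmp/demo.lance", max_rows_per_file=1_000)
ds.delete("id < 1000")  # records deletion files; no physical rewrite yet

print("fragments before:", len(ds.get_fragments()))

# Rewrite: merges undersized fragments and materializes the deletions.
ds.optimize.compact_files(
    target_rows_per_fragment=1024 * 1024,
    materialize_deletions=True,
)

ds = lance.dataset("/tmp/demo.lance")  # re-open to read the compacted version
print("fragments after:", len(ds.get_fragments()))
```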
Two execution paths exist:
- Standard rewrite: The input fragments are scanned through a `Scanner`, producing a stream of `RecordBatch` objects. This stream is fed to the fragment writer, which produces new data files with fresh encoding. This path handles all edge cases, including schema evolution and blob columns.
- Binary copy: When eligible (non-legacy storage format, no deletion files, identical schemas across all input data files, no extra global buffers, no blob columns), the data pages are copied verbatim from input files to the output file without re-encoding. This is significantly faster but cannot merge pages, so it is best suited for materializing deletions rather than merging many small fragments. A minimal eligibility check is sketched below.
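To make the eligibility test concrete, here is an illustrative predicate over those conditions. `FragmentSummary` and its fields are hypothetical stand-ins invented for this sketch, not Lance APIs:

```python
# Illustrative only: mirrors the binary-copy conditions listed above.
from dataclasses import dataclass

@dataclass
class FragmentSummary:           # hypothetical stand-in, not a Lance type
    legacy_format: bool          # pre-v2 storage layout
    has_deletions: bool          # a deletion file is attached
    schema_id: int               # identifier for the data file schema
    extra_global_buffers: bool   # global buffers beyond the standard ones
    has_blob_columns: bool       # externally stored blob data

def binary_copy_eligible(fragments: list[FragmentSummary]) -> bool:
    return (
        not any(f.legacy_format for f in fragments)
        and not any(f.has_deletions for f in fragments)
        and len({f.schema_id for f in fragments}) == 1  # identical schemas
        and not any(f.extra_global_buffers for f in fragments)
        and not any(f.has_blob_columns for f in fragments)
    )
```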
In both paths, row IDs are captured during the scan. After writing, new fragment IDs are reserved through a lightweight commit, and a row ID mapping (old address to new address) is constructed. This mapping is essential for remapping vector and scalar indices that reference the old row addresses.
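The sketch below models how such a mapping can be built. Lance packs a row address into a u64 with the fragment ID in the upper 32 bits and the row offset in the lower 32 bits; the rest of the code (the fragment tuples, `build_row_id_map`) is a toy model of the scan-order transposition, not the actual Rust implementation:

```python
def row_addr(frag_id: int, offset: int) -> int:
    # A Lance row address is a u64: fragment ID high 32 bits, offset low 32.
    return (frag_id << 32) | offset

def build_row_id_map(old_frags, new_frag_id, rows_per_new_frag):
    """old_frags: list of (frag_id, num_rows, deleted_offsets), in scan order.
    Returns {old_address: new_address} for every surviving row."""
    mapping, frag_id, new_offset = {}, new_frag_id, 0
    for old_id, num_rows, deleted in old_frags:
        for off in range(num_rows):
            if off in deleted:
                continue  # deleted rows simply get no new address
            if new_offset == rows_per_new_frag:
                frag_id, new_offset = frag_id + 1, 0  # next new fragment
            mapping[row_addr(old_id, off)] = row_addr(frag_id, new_offset)
            new_offset += 1
    return mapping

# Two small fragments (IDs 0 and 1) with one deleted row, rewritten into a
# single new fragment whose ID (2) was reserved by the lightweight commit:
m = build_row_id_map([(0, 3, {1}), (1, 2, set())],
                     new_frag_id=2, rows_per_new_frag=1024)
assert m[row_addr(0, 2)] == row_addr(2, 1)  # old (0, 2) -> new (2, 1)
```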
## Usage
Fragment rewriting is used:
- As the middle step of the three-phase compaction workflow: plan, execute, commit.
- In distributed compaction, where each worker receives a serialized `CompactionTask` and calls `execute()` to produce a `RewriteResult` (see the sketch after this list).
- In the all-in-one `compact_files()` convenience function, where execution happens automatically between planning and committing.
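A hedged sketch of the distributed flow, built on the plan/execute/commit surface of pylance's `lance.optimize` module (class and option names reflect recent releases and may differ in yours):

```python
# Sketch assuming lance.optimize.Compaction from recent pylance releases.
import lance
from lance.optimize import Compaction

ds = lance.dataset("/tmp/demo.lance")

# Phase 1: plan. Produces independent tasks, each naming its input fragments.
plan = Compaction.plan(ds, {"target_rows_per_fragment": 1024 * 1024})

# Phase 2: execute. In production each serialized task would be shipped to a
# worker; here they run inline. Each returns a RewriteResult carrying the new
# fragments and the old-to-new row ID mapping.
results = [task.execute(ds) for task in plan.tasks]

# Phase 3: commit. Swaps old fragments for new ones and remaps indices.
metrics = Compaction.commit(ds, results)
print(metrics)
```

Because each task references only its own input fragments, execution is embarrassingly parallel; coordination is needed only at the final commit.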
## Theoretical Basis
Fragment rewriting is conceptually a streaming transformation:
```
for each task in plan:
    if binary_copy_eligible(task.fragments):
        new_fragments = copy_pages(task.fragments)
    else:
        stream = scan(task.fragments, skip_deleted=true)
        new_fragments = write(stream, target_rows_per_fragment)
    reserve_fragment_ids(new_fragments)
    row_id_map = transpose(old_row_addrs, old_fragments, new_fragments)
    emit RewriteResult(new_fragments, row_id_map, metrics)
```
Key invariants maintained during rewriting:
- Row order preservation: Rows within each task maintain their original scan order, preserving insertion order across the dataset.
- Atomicity: New fragments are written to new file paths. The old fragments remain untouched until the commit phase, ensuring that a failed rewrite does not corrupt the dataset.
- Fragment ID reservation: New fragment IDs are reserved through a `ReserveFragments` operation to avoid ID collisions with concurrent writers.
- Row address transposition: The mapping from an old `RowAddress` `(fragment_id, row_offset)` to a new `RowAddress` `(new_fragment_id, new_row_offset)` is computed deterministically from the scan order, enabling correct index remapping.