# Principle: LMCache KV Cache Storage
| Knowledge Sources | |
|---|---|
| Domains | Caching, Memory_Management |
| Last Updated | 2026-02-09 00:00 GMT |
## Overview
A chunked, prefix-based caching strategy that stores KV cache tensors from GPU inference into multi-tier storage for later reuse.
## Description
KV Cache Storage is the process of extracting key-value attention tensors from a GPU serving engine's paged memory and persisting them in external storage (CPU RAM, disk, or remote). The KV cache is split into fixed-size token chunks (default 256 tokens), each identified by a content-addressable hash of its token prefix. This chunking enables prefix-based sharing: if two requests share the first N tokens, their KV caches for those tokens are identical and can be stored once.
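For instance, here is a minimal sketch of how chunk boundaries fall for a hypothetical 600-token prompt at the default chunk size:

```python
# Chunk boundaries for a 600-token prompt with the default 256-token chunks
chunk_size, num_tokens = 256, 600
boundaries = [(start, min(start + chunk_size, num_tokens))
              for start in range(0, num_tokens, chunk_size)]
print(boundaries)  # [(0, 256), (256, 512), (512, 600)]
```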
The storage flow (sketched in code after this list):
- Token database maps the input token sequence to chunk boundaries and keys
- Memory objects are allocated from the storage backend's allocator
- GPU connector extracts KV tensors from GPU paged memory into CPU memory objects
- Storage manager distributes memory objects to configured backends (CPU, disk, remote)
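A minimal sketch of how these stages compose on the store path. The component names mirror the list above, but every method signature here is an assumption for illustration, not LMCache's actual API:

```python
# Illustrative store path; all signatures are assumed, not LMCache's real API.
def store_kv(tokens, token_db, allocator, gpu_connector, storage_manager):
    for start, end, key in token_db.process_tokens(tokens):
        if storage_manager.contains(key):    # already cached: skip (incremental)
            continue
        memory_obj = allocator.allocate(end - start)    # CPU-side memory object
        gpu_connector.from_gpu(memory_obj, start, end)  # copy KV tensors off the GPU
        storage_manager.put(key, memory_obj)            # fan out to CPU/disk/remote
```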
## Usage
Use this principle whenever you want to cache KV tensors for reuse. It is triggered automatically by the vLLM connector after each inference request completes (unless force_skip_save is set). The store operation is non-blocking for remote backends and runs on a background stream for GPU-to-CPU transfers.
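The background-stream pattern for the GPU-to-CPU leg can be sketched in a few lines of PyTorch. This shows the general technique (a pinned host buffer plus a dedicated CUDA stream), not LMCache's actual transfer code:

```python
import torch

copy_stream = torch.cuda.Stream()  # dedicated stream so copies overlap compute

def offload_chunk(kv_gpu: torch.Tensor) -> torch.Tensor:
    # Pinned host memory is required for a truly asynchronous device-to-host copy
    kv_cpu = torch.empty(kv_gpu.shape, dtype=kv_gpu.dtype,
                         device="cpu", pin_memory=True)
    with torch.cuda.stream(copy_stream):
        kv_cpu.copy_(kv_gpu, non_blocking=True)
    return kv_cpu  # synchronize copy_stream before reading kv_cpu on the host
```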
## Theoretical Basis
The chunking strategy uses prefix-aware hashing:
```python
# Chunk key generation, made runnable. The hash covers the full token prefix
# up to each chunk boundary (prefix-aware); sha256 stands in for LMCache's
# actual hash function, which is an implementation detail.
import hashlib
from collections import namedtuple

CacheEngineKey = namedtuple("CacheEngineKey", ["chunk_hash", "fmt"])

def chunk_keys(tokens, memory_format, chunk_size=256):
    for start in range(0, len(tokens), chunk_size):
        end = min(start + chunk_size, len(tokens))
        chunk_hash = hashlib.sha256(str(tokens[:end]).encode()).hexdigest()
        yield (start, end, CacheEngineKey(chunk_hash, memory_format))
```
Key properties:
- Deterministic: Same token sequence always produces same chunk keys
- Prefix-sharing: If two sequences share a prefix, their prefix chunks have identical keys
- Incremental: Only new chunks (not already in cache) need to be stored
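The prefix-sharing and incremental properties can be checked directly against the chunk_keys sketch above (the memory_format string here is a placeholder):

```python
a = list(range(600))           # request A: 600 tokens -> 3 chunks
b = list(range(512)) + [7, 8]  # request B: shares A's first 512 tokens
keys_a = [k for _, _, k in chunk_keys(a, memory_format="fmt")]
keys_b = [k for _, _, k in chunk_keys(b, memory_format="fmt")]
assert keys_a[:2] == keys_b[:2]  # shared 512-token prefix: first two keys match
assert keys_a[2] != keys_b[2]    # sequences diverge in the third chunk
# Incremental: after storing A, only B's divergent third chunk is new.
```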