
Heuristic:Risingwavelabs Risingwave Docker Memory Allocation

From Leeroopedia



Knowledge Sources
Domains Deployment, Optimization, Infrastructure
Last Updated 2026-02-09 08:00 GMT

Overview

Docker deployments should allocate at least 28 GiB of memory, with specific per-component splits: compute gets roughly 71% (20 GiB), frontend and compactor each get about 14% (4 GiB), and the memory manager target should be set to ~98% of the compute allocation to leave headroom.

Description

RisingWave's Docker deployment requires careful memory allocation across its components (compute, frontend, compactor). The official documentation provides a scaling table for different memory profiles. The key insight is that the compute node is the largest consumer and should receive approximately 71% of total allocation, while the memory manager target should be set slightly below the compute allocation (leaving ~2% headroom for JVM and OS overhead). Under-allocating memory causes OOM kills; over-allocating to the memory manager can cause the process to exceed the container limit.

Usage

Apply this heuristic when sizing Docker containers for RisingWave, diagnosing OOM kills in containerized deployments, or optimizing memory usage for specific workloads. This is essential tribal knowledge for any production Docker deployment.

The Insight (Rule of Thumb)

  • Minimum Docker Memory: 8 GiB (bare minimum for development).
  • Recommended Docker Memory: 28 GiB (default in docker-compose.yml).
  • Memory Split Table:

| Docker Memory | Compute | Frontend | Compactor | Memory Manager Target |
| 8 GiB | 6 GiB | 1 GiB | 1 GiB | 5.6 GiB |
| 14 GiB | 10 GiB | 2 GiB | 2 GiB | 9.8 GiB |
| 28 GiB | 20 GiB | 4 GiB | 4 GiB | 19.6 GiB |
| 58 GiB | 46 GiB | 6 GiB | 6 GiB | 44.8 GiB |
  • JVM Heap for CDC: Defaults to 7% of system memory; override with JVM_HEAP_SIZE env var.
  • Trade-off: Allocating more to compute improves query throughput and reduces spilling. Allocating more to compactor reduces compaction lag. The memory manager target should always leave ~2-5% headroom below the compute allocation.
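The rules of thumb above can be sketched as a small sizing helper. The 71% / 14% / 14% ratios and the ~98% memory manager target are assumptions distilled from this page's split table; the function name and rounding are illustrative, and real deployments should use the exact values from docker/README.md.

```python
GIB = 1024 ** 3  # bytes per GiB, if byte values are needed downstream

def suggest_split(total_gib: float) -> dict:
    """Suggest per-component memory (GiB) for a given Docker limit,
    using the approximate ratios from the split table."""
    compute = round(total_gib * 0.71, 1)    # largest consumer: executors, caches
    frontend = round(total_gib * 0.14, 1)   # query plans, session state
    compactor = round(total_gib * 0.14, 1)  # SSTable sort/merge buffers
    target = round(compute * 0.98, 1)       # leave ~2% headroom below compute
    return {
        "compute_gib": compute,
        "frontend_gib": frontend,
        "compactor_gib": compactor,
        "memory_manager_target_gib": target,
    }

print(suggest_split(28))
```

Note the rounded output (19.9 / 3.9 / 3.9 GiB) differs slightly from the official table's 20 / 4 / 4 GiB row; the ratios are approximations of that table, not its source.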

Reasoning

The compute node runs stream and batch executors that maintain in-memory caches, hash tables for joins, and aggregation state. It is the most memory-hungry component by far. The frontend primarily holds query plans and session state, requiring less memory. The compactor reads and writes SSTables during compaction, needing enough memory to buffer sort/merge operations.

The memory manager target is set slightly below the compute allocation because the compute process also uses memory for JVM (when embedded connectors are active), OS page cache, and gRPC buffers. Setting the target equal to the allocation would cause the memory manager to use all available memory, leaving nothing for these overheads and causing OOM kills.
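A minimal sanity check for a proposed allocation can encode the two constraints described above: the component totals must fit within the container limit, and the memory manager target must sit far enough below the compute allocation to leave room for JVM, page cache, and gRPC overheads. The function and its 2% threshold are illustrative assumptions, not an official tool.

```python
def validate(container_gib, compute_gib, frontend_gib, compactor_gib,
             target_gib, min_headroom_frac=0.02):
    """Return a list of violations for a proposed memory allocation."""
    errors = []
    # Components must fit inside the container limit, or the kernel
    # OOM-kills the process.
    if compute_gib + frontend_gib + compactor_gib > container_gib:
        errors.append("component sum exceeds container limit")
    # The memory manager target must leave headroom below compute.
    headroom = (compute_gib - target_gib) / compute_gib
    if headroom < min_headroom_frac - 1e-9:  # tolerance for float rounding
        errors.append("memory-manager target leaves <2% headroom")
    return errors

print(validate(28, 20, 4, 4, 19.6))  # 98% target: no violations
print(validate(28, 20, 4, 4, 20))    # target == compute: no headroom
```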

The 7% JVM heap default for CDC connectors is based on the observation that CDC workloads are I/O-bound rather than memory-bound. The JVM primarily holds Debezium engine state and change event buffers, which are small relative to the streaming state managed by the Rust compute node.
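The 7% default and the JVM_HEAP_SIZE override can be sketched as follows. The function name and the "4g"-style value format are assumptions for illustration; only the 7% default and the JVM_HEAP_SIZE variable name come from this page.

```python
import os

def jvm_heap_gib(system_gib: float) -> float:
    """Return the CDC connector JVM heap in GiB: 7% of system memory
    by default, unless the JVM_HEAP_SIZE env var overrides it."""
    override = os.environ.get("JVM_HEAP_SIZE")
    if override:
        # e.g. "4g" -> 4.0; only a gigabyte suffix is handled in this sketch
        return float(override.rstrip("gG"))
    return round(system_gib * 0.07, 2)

print(jvm_heap_gib(28))  # prints 1.96 when JVM_HEAP_SIZE is unset
```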

Code evidence from docker/README.md lines 36-42:

| Docker Container Memory | 8 GiB | 14 GiB | 28 GiB | 58 GiB |
| compute-opts.total-memory-bytes | 6 GiB | 10 GiB | 20 GiB | 46 GiB |
| frontend-opts.frontend-total-memory-bytes | 1 GiB | 2 GiB | 4 GiB | 6 GiB |
| compactor-opts.compactor-total-memory-bytes | 1 GiB | 2 GiB | 4 GiB | 6 GiB |

Code evidence from docker/docker-compose.yml lines 78-81:

deploy:
  resources:
    limits:
      memory: 28G
    reservations:
      memory: 28G
