Environment:Langfuse Langfuse Redis 7 Queue Cache
| Knowledge Sources | |
|---|---|
| Domains | Infrastructure, Queue, Cache |
| Last Updated | 2026-02-14 06:00 GMT |
Overview
Redis 7.2.4 (standalone, sentinel, or cluster mode) powering BullMQ 5.34.10 job queues, API key caching, prompt caching, model match caching, rate limiting, and S3 slowdown tracking.
Description
Redis serves as the backbone for all asynchronous processing in Langfuse via BullMQ queues (25+ queue types), as well as caching for API keys, prompts, model pricing, and eval job configurations. The system supports three deployment modes: standalone, Redis Sentinel (high availability), and Redis Cluster (horizontal scaling with 6+ nodes). Connection management uses ioredis 5.8.2 with auto-pipelining and exponential backoff retry strategy.
Usage
Use this environment for all Langfuse deployments that require background processing (worker), caching, or rate limiting. Redis is mandatory when the worker process is running.
System Requirements
| Category | Requirement | Notes |
|---|---|---|
| Database | Redis 7+ | Development: redis:7.2.4; Production: redis:7 |
| RAM | 1GB+ | Scales with queue depth and cache size |
| Network | TCP port 6379 | Default; cluster uses ports 6370-6375 |
Dependencies
System Packages
- `redis-server` >= 7 (via Docker: `redis:7.2.4`)
Node.js Packages
- `bullmq` = 5.34.10
- `ioredis` = 5.8.2
Credentials
The following environment variables configure Redis access:
- `REDIS_HOST`: Redis hostname (standalone mode)
- `REDIS_PORT`: Redis port (default: 6379)
- `REDIS_AUTH`: Redis password
- `REDIS_USERNAME`: Redis username (optional)
- `REDIS_CONNECTION_STRING`: Full connection string (alternative to host/port)
- `REDIS_KEY_PREFIX`: Key prefix for multi-tenant deployments (optional)
TLS Configuration
- `REDIS_TLS_ENABLED`: Enable TLS (default: `false`)
- `REDIS_TLS_CA_PATH`: CA certificate path
- `REDIS_TLS_CERT_PATH`: Client certificate path
- `REDIS_TLS_KEY_PATH`: Client key path
Cluster Configuration
- `REDIS_CLUSTER_ENABLED`: Enable cluster mode (default: `false`)
- `REDIS_CLUSTER_NODES`: Comma-separated cluster nodes
Sentinel Configuration
- `REDIS_SENTINEL_ENABLED`: Enable Sentinel (default: `false`)
- `REDIS_SENTINEL_NODES`: Comma-separated Sentinel nodes
- `REDIS_SENTINEL_MASTER_NAME`: Sentinel master name
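Taken together, the three `*_ENABLED` flags act as mutually exclusive mode selectors, and the `*_NODES` variables carry comma-separated `host:port` lists. A hedged sketch of how a client might branch on them; `resolveMode` and `parseNodes` are illustrative helpers, not Langfuse's actual API:

```typescript
type RedisMode = "standalone" | "sentinel" | "cluster";

// Illustrative only: pick a connection mode from the env flags documented above.
// Cluster wins over Sentinel if both are (mis)configured as "true".
function resolveMode(env: Record<string, string | undefined>): RedisMode {
  if (env.REDIS_CLUSTER_ENABLED === "true") return "cluster";
  if (env.REDIS_SENTINEL_ENABLED === "true") return "sentinel";
  return "standalone";
}

// Parse the "host1:port1,host2:port2" lists used by the *_NODES variables,
// falling back to the default Redis port when none is given.
function parseNodes(csv: string): { host: string; port: number }[] {
  return csv.split(",").map((node) => {
    const [host, port] = node.trim().split(":");
    return { host, port: port ? Number(port) : 6379 };
  });
}

console.log(resolveMode({ REDIS_SENTINEL_ENABLED: "true" })); // "sentinel"
console.log(parseNodes("redis-1:7000, redis-2:7001"));
```

The parsed node list would then be handed to `new Redis.Cluster(nodes)` or to ioredis's `sentinels` option, depending on the resolved mode.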
Quick Install
```bash
# Start Redis via Docker Compose
pnpm run infra:dev:up

# For Redis cluster testing:
docker compose -f docker-compose.dev-redis-cluster.yml up -d
```
Code Evidence
Retry strategy from packages/shared/src/server/redis/redis.ts:
```typescript
retryStrategy: (times: number) => {
  if (times >= 5) {
    logger.warn(`Connection to redis lost. Retry attempt: ${times}`);
  }
  // Retries forever. Waits at least 1s and at most 20s between retries.
  return Math.max(Math.min(Math.exp(times), 20000), 1000);
},
reconnectOnError: (err) => {
  return err.message.includes("READONLY") ? 2 : false;
},
```
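Plugging small values into that formula shows how the clamping behaves: the delay stays pinned at the 1 s floor until `Math.exp(times)` exceeds 1000 ms (around the 7th attempt), then grows exponentially until it hits the 20 s ceiling at the 10th. A standalone sketch:

```typescript
// Same clamped exponential backoff as the snippet above:
// at least 1000 ms, at most 20000 ms, e^times ms in between.
const retryDelayMs = (times: number): number =>
  Math.max(Math.min(Math.exp(times), 20000), 1000);

for (const t of [1, 6, 7, 8, 9, 10]) {
  console.log(`attempt ${t}: wait ${Math.round(retryDelayMs(t))} ms`);
}
// attempts 1-6 wait 1000 ms; attempt 8 waits ~2981 ms; attempt 10+ waits 20000 ms
```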
Queue prefix for cluster compatibility from packages/shared/src/server/redis/redis.ts:
```typescript
export const getQueuePrefix = (queueName: string): string | undefined => {
  if (env.REDIS_CLUSTER_ENABLED === "true") {
    return `{${queueName}}`; // Hash tags for same-slot routing
  }
  return undefined;
};
```
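To see why the hash tag matters, the slot math can be reproduced locally. Redis Cluster assigns each key to one of 16384 slots via CRC16 (the XMODEM variant) of the key, and when the key contains a `{...}` section, only the substring inside the first non-empty pair of braces is hashed. This standalone sketch (not Langfuse code) demonstrates that all keys sharing a `{queueName}` tag land on the same slot:

```typescript
// CRC16/XMODEM (poly 0x1021, init 0x0000), the checksum Redis Cluster
// uses for slot assignment.
function crc16(s: string): number {
  let crc = 0;
  for (let i = 0; i < s.length; i++) {
    crc ^= s.charCodeAt(i) << 8;
    for (let j = 0; j < 8; j++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

// Hash only the first non-empty {...} section when present, per the
// Redis Cluster key distribution rules.
function clusterSlot(key: string): number {
  const open = key.indexOf("{");
  if (open !== -1) {
    const close = key.indexOf("}", open + 1);
    if (close > open + 1) key = key.slice(open + 1, close);
  }
  return crc16(key) % 16384;
}

// All BullMQ keys for one queue share the tag, so they share a slot:
console.log(clusterSlot("{ingestion-queue}:wait"));   // same slot...
console.log(clusterSlot("{ingestion-queue}:events")); // ...as this
console.log(clusterSlot("ingestion-queue:wait"));     // untagged: may differ
```

Without the tag, BullMQ's multi-key Lua scripts would hit keys spread across slots and fail with the `CROSSSLOT` error listed below.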
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
| `ECONNREFUSED 127.0.0.1:6379` | Redis not running | Run `pnpm run infra:dev:up` |
| `READONLY You can't write against a read only replica` | Connected to replica | Auto-reconnects (handled by `reconnectOnError`) |
| `CROSSSLOT Keys in request don't hash to the same slot` | Cluster mode: keys on different slots | Use hash tags `{queueName}` for related keys |
| `OOM command not allowed when used memory > 'maxmemory'` | Redis out of memory | Increase Redis memory or reduce queue backlog |
Compatibility Notes
- Standalone: Default mode; simplest for development and small deployments.
- Sentinel: For high availability without horizontal scaling. Configure with `REDIS_SENTINEL_ENABLED=true`.
- Cluster: For horizontal scaling. Uses 6+ nodes. Hash tags ensure queue keys co-locate. Configure with `REDIS_CLUSTER_ENABLED=true`.
- Auto-pipelining: Enabled by default (`REDIS_ENABLE_AUTO_PIPELINING=true`); disabled for the rate limit service to avoid ioredis issue #1931.
- Memory Policy: Must be set to `noeviction` to prevent BullMQ data loss.
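The `noeviction` requirement comes from BullMQ itself: under any other `maxmemory-policy`, Redis may evict queue keys under memory pressure and silently lose jobs, whereas `noeviction` rejects writes at the limit instead. A minimal `redis.conf` fragment (illustrative, not shipped by Langfuse):

```
# Required for BullMQ: never evict keys; reject writes when the
# memory limit is reached instead of silently dropping job data.
maxmemory-policy noeviction
```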
Related Pages
- Implementation:Langfuse_Langfuse_IngestionQueue
- Implementation:Langfuse_Langfuse_TraceUpsertQueue
- Implementation:Langfuse_Langfuse_CreateEvalQueue
- Implementation:Langfuse_Langfuse_PromptService_Cache
- Implementation:Langfuse_Langfuse_DatasetRunItemUpsertQueue
- Implementation:Langfuse_Langfuse_PublishToOtelIngestionQueue
- Implementation:Langfuse_Langfuse_OtelIngestionQueueProcessor
- Implementation:Langfuse_Langfuse_EvalJobExecutorQueueProcessor