

Environment:FlowiseAI Flowise Queue Mode Environment

From Leeroopedia
Knowledge Sources
Domains Infrastructure, Queue, Redis
Last Updated 2026-02-12 07:30 GMT

Overview

Redis-backed BullMQ queue environment for horizontal scaling of Flowise with separate main and worker processes.

Description

When Flowise is deployed in queue mode (`MODE=queue`), the architecture splits into a main process (API server + UI) and one or more worker processes that execute chatflow/agentflow predictions. Redis serves as the message broker via BullMQ. This mode enables horizontal scaling, distributed rate limiting, shared caching (LLM, embeddings, MCP toolkits, SSO tokens), and fault-tolerant execution. The worker process runs on port 5566 by default for healthcheck purposes.
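The split described above can be illustrated with a minimal sketch (illustrative only, not Flowise's actual code; the real decision lives in the predictions controller quoted under Code Evidence): in queue mode the main process enqueues a prediction for a worker, otherwise it executes it inline.

```typescript
// Sketch of the main-process routing decision in queue mode.
// `routePrediction` and `Execution` are hypothetical names for illustration.
type Execution = 'inline' | 'enqueue'

function routePrediction(mode: string | undefined): Execution {
    // Mirrors the `process.env.MODE === MODE.QUEUE` check in Flowise
    return mode === 'queue' ? 'enqueue' : 'inline'
}

console.log(routePrediction('queue'))     // 'enqueue' — handed to a worker via BullMQ
console.log(routePrediction(undefined))   // 'inline' — standalone execution
```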

Usage

Use this environment for production deployments requiring horizontal scaling, high-availability setups, or when running Flowise behind a load balancer with multiple instances. Required when using the Docker Compose queue configurations.

System Requirements

  • Redis: Redis 6.0+ server (used for the BullMQ job queue and distributed caching)
  • Hardware: 8GB+ RAM across all processes; the main process and each worker need 4GB+
  • Network: ports 3000 (main), 5566 (worker), 6379 (Redis), all configurable via environment variables

Dependencies

Node.js Packages

  • `bullmq` = 5.45.2 — Job queue built on Redis
  • `ioredis` — Redis client (transitive dependency of bullmq)

External Services

  • Redis 6.0+ server (can be Redis, AWS ElastiCache, Upstash, etc.)

Credentials

The following environment variables configure queue mode:

Mode Selection:

  • `MODE`: Set to `queue` to enable queue mode; any other value or unset runs Flowise as a single standalone process (default: `main`)

Queue Configuration:

  • `QUEUE_NAME`: BullMQ queue name (default: `flowise-queue`)
  • `QUEUE_REDIS_EVENT_STREAM_MAX_LEN`: Max Redis stream length (default: `100000`)
  • `WORKER_CONCURRENCY`: Maximum concurrent job processing (default: `100000`)
  • `REMOVE_ON_AGE`: Remove completed jobs after N seconds (default: `86400` = 24h)
  • `REMOVE_ON_COUNT`: Remove completed jobs after N count (default: `10000`)
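The two retention variables above map onto BullMQ's job-cleanup options, which accept an `{ age, count }` shape on `removeOnComplete`. A minimal sketch (the `retentionOptions` helper is hypothetical, not Flowise's code; defaults mirror the ones listed above):

```typescript
// Translate REMOVE_ON_AGE / REMOVE_ON_COUNT into BullMQ job options.
function retentionOptions(env: Record<string, string | undefined>) {
    const age = Number(env.REMOVE_ON_AGE ?? 86400)     // seconds before a completed job is removed
    const count = Number(env.REMOVE_ON_COUNT ?? 10000) // max completed jobs to keep
    return { removeOnComplete: { age, count } }
}

// Defaults: keep completed jobs for 24h, up to 10000 of them.
console.log(retentionOptions({}).removeOnComplete)
```

The resulting object would be passed as job options when adding jobs to the queue (e.g. `queue.add(name, data, retentionOptions(process.env))`).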

Redis Connection:

  • `REDIS_URL`: Full Redis connection URL (overrides individual settings)
  • `REDIS_HOST`: Redis hostname (default: `localhost`)
  • `REDIS_PORT`: Redis port (default: `6379`)
  • `REDIS_USERNAME`: Redis username (if ACL enabled)
  • `REDIS_PASSWORD`: Redis password
  • `REDIS_TLS`: Enable TLS (default: `false`)
  • `REDIS_CERT`: TLS certificate (base64)
  • `REDIS_KEY`: TLS private key (base64)
  • `REDIS_CA`: TLS CA certificate (base64)
  • `REDIS_KEEP_ALIVE`: Keep-alive interval in milliseconds
  • `ENABLE_BULLMQ_DASHBOARD`: Enable BullMQ monitoring dashboard
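A sketch of how these variables resolve into a connection configuration (the `resolveRedisSettings` helper and `RedisSettings` type are assumptions for illustration, not Flowise internals): `REDIS_URL`, when set, takes precedence over the individual host/port/credential settings.

```typescript
// Resolve Redis connection settings from the environment variables above.
interface RedisSettings {
    url?: string
    host?: string
    port?: number
    username?: string
    password?: string
    tls?: boolean
}

function resolveRedisSettings(env: Record<string, string | undefined>): RedisSettings {
    // REDIS_URL overrides all individual settings
    if (env.REDIS_URL) return { url: env.REDIS_URL }
    return {
        host: env.REDIS_HOST ?? 'localhost',
        port: Number(env.REDIS_PORT ?? 6379),
        username: env.REDIS_USERNAME,
        password: env.REDIS_PASSWORD,
        tls: env.REDIS_TLS === 'true'
    }
}

console.log(resolveRedisSettings({ REDIS_URL: 'redis://cache:6379' }))
```

The main process and every worker must resolve to the same Redis instance, or enqueued jobs will never be picked up.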

Worker:

  • `WORKER_PORT`: Worker healthcheck port (default: `5566`)

Quick Install

# Start Redis
docker run -d --name redis -p 6379:6379 redis:7-alpine

# Start Flowise main process
export MODE=queue
export REDIS_HOST=localhost
export REDIS_PORT=6379
pnpm start

# Start Flowise worker (in a separate terminal; export the same
# MODE and REDIS_* variables there too)
export MODE=queue
export REDIS_HOST=localhost
export REDIS_PORT=6379
pnpm start-worker

Code Evidence

Queue mode conditional from `packages/server/src/utils/rateLimit.ts:25`:

if (process.env.MODE === MODE.QUEUE) {
    // Use Redis-backed rate limiter store
    this.redisClient = new Redis(/* ... */)
}

Redis caching for SSO tokens from `packages/server/src/CachePool.ts:55`:

await this.redisClient.set(`ssoTokenCache:${ssoToken}`, serializedValue, 'EX', 120)
// Expires after 120 seconds (2 minutes)

Queue mode prediction routing from `packages/server/src/controllers/predictions/index.ts:75`:

if (process.env.MODE === MODE.QUEUE) {
    // Route prediction to worker via BullMQ queue
}

Redis event publisher reconnection from `packages/server/src/queue/RedisEventPublisher.ts:67`:

logger.warn(`[RedisEventPublisher] Redis client connection ended`)

Common Errors

  • `ECONNREFUSED` to Redis. Cause: Redis server not running or unreachable. Solution: start Redis and verify `REDIS_HOST` and `REDIS_PORT`.
  • `[RedisEventPublisher] Redis client connection ended`. Cause: the Redis connection dropped. Solution: check Redis server health and network stability; the client reconnects automatically.
  • Worker not processing jobs. Cause: no worker started, or the worker points at a different Redis. Solution: ensure a worker is running via `pnpm start-worker` with the same Redis connection as the main process.
  • `SQLITE_BUSY` in queue mode. Cause: SQLite cannot handle concurrent writes from multiple processes. Solution: switch to PostgreSQL or MySQL for queue mode deployments.

Compatibility Notes

  • SQLite: Not supported in queue mode due to concurrent write limitations; use PostgreSQL or MySQL
  • Redis Cluster: BullMQ supports Redis Cluster; configure via REDIS_URL with cluster endpoints
  • TLS: For managed Redis services (AWS ElastiCache, Azure Cache), enable REDIS_TLS and provide certificates
  • Horizontal scaling: Multiple worker instances can connect to the same Redis and queue; the main process distributes work automatically
