
Principle:Langgenius Dify Frontend Container Runtime

From Leeroopedia


Knowledge Sources
Domains: Frontend, Docker, Process Management
Last Updated: 2026-02-08 00:00 GMT

Overview

Frontend Container Runtime is the principle of injecting runtime configuration into statically built frontend applications running in containers, using entrypoint scripts and process management to bridge the gap between build-time asset compilation and runtime environment variability.

Description

Next.js applications present a unique configuration challenge in containerized deployments. During next build, environment variables prefixed with NEXT_PUBLIC_ are inlined into the JavaScript bundles as static strings. This means the built artifact is permanently bound to whatever values were present at build time. However, Docker deployments need the same image to work across different environments (staging, production) with different API URLs, feature flags, and configuration values.

The runtime environment injection pattern solves this by deferring environment variable resolution to container startup time rather than build time. This is accomplished through a Docker entrypoint script that:

  1. Maps external environment variables to NEXT_PUBLIC_ variables: Docker Compose passes variables like CONSOLE_API_URL and APP_API_URL into the container. The entrypoint script transforms these into the NEXT_PUBLIC_ prefixed variables that Next.js reads at runtime, often appending path suffixes (e.g., /console/api, /api) to base URLs.
  2. Sets default values for optional flags: Feature flags and optional configuration receive sensible defaults via Bash parameter expansion (${VAR:-default}), ensuring the application starts correctly even when optional variables are omitted.
  3. Launches the application with a process manager: Rather than running a single Node.js process, the entrypoint uses PM2 in cluster mode to spawn multiple worker processes, improving throughput and resilience on multi-core containers.

This pattern maintains the single-image, multiple-environment deployment model that Docker encourages, while respecting the Next.js framework's build-time optimization architecture.
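The three steps can be sketched as a minimal entrypoint script. This is an illustrative skeleton, not the actual Dify script: the base URLs are placeholders, and the `unset` line merely simulates a fresh container environment for the sketch.

```shell
#!/bin/sh
# Placeholder values standing in for what Docker Compose would inject.
CONSOLE_API_URL="https://dify.example.com"
APP_API_URL="https://dify.example.com"
unset DEPLOY_ENV PM2_INSTANCES  # simulate optional variables being omitted

# Phase 1: map external names to the NEXT_PUBLIC_ names the app reads,
# appending each API subsystem's path suffix to the base URL.
export NEXT_PUBLIC_API_PREFIX="${CONSOLE_API_URL}/console/api"
export NEXT_PUBLIC_PUBLIC_API_PREFIX="${APP_API_URL}/api"

# Phase 2: optional flags fall back to defaults via parameter expansion.
export NEXT_PUBLIC_DEPLOY_ENV="${DEPLOY_ENV:-PRODUCTION}"
export PM2_INSTANCES="${PM2_INSTANCES:-2}"

# Phase 3: launch PM2 in the foreground so it stays PID 1.
# (Commented out here; requires PM2 and the built server bundle.)
# pm2 start ./server.js --name dify-web -i "$PM2_INSTANCES" --no-daemon
echo "API prefix: $NEXT_PUBLIC_API_PREFIX"
```

Because the mapping happens via `export` before the server process starts, the server-rendered pages see the per-deployment values without any rebuild of the image.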

The build-time vs. runtime configuration distinction:

Aspect           Build-time (next build)          Runtime (entrypoint.sh)
------           -----------------------          -----------------------
When resolved    During Docker image build        During container startup
Scope            Baked into JS bundles            Available to SSR and CSR
Changeability    Requires image rebuild           Changed per deployment via .env
Variable prefix  NEXT_PUBLIC_* at build           Mapped from plain names to NEXT_PUBLIC_*
Use case         Static assets, CDN-cached pages  API endpoints, feature flags, per-environment config

Process management with PM2:

PM2 operates in cluster mode, which uses Node.js's built-in cluster module to fork multiple worker processes that share the same server port. This provides:

  • Horizontal scaling within a single container: the PM2_INSTANCES variable controls the number of workers (default: 2)
  • Zero-downtime restarts: PM2 can gracefully reload workers one at a time
  • Automatic crash recovery: if a worker dies, PM2 restarts it immediately
  • Memory monitoring: PM2 can restart workers that exceed memory thresholds
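Assuming a stock PM2 installation, these capabilities correspond to standard PM2 commands. The process name dify-web below is illustrative and matches the naming used elsewhere in this article; these are reference commands, not the project's documented workflow:

```shell
# Change the number of cluster workers at runtime.
pm2 scale dify-web 4

# Zero-downtime reload: replaces workers one at a time.
pm2 reload dify-web

# Restart any worker that exceeds a memory threshold.
pm2 start server.js --name dify-web -i 2 --max-memory-restart 500M
```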

Usage

Apply this principle whenever:

  • Deploying the Dify web frontend in a Docker container
  • Understanding how API endpoint URLs are resolved at runtime
  • Debugging configuration issues where the frontend cannot reach the backend API
  • Scaling the frontend service to handle more concurrent users
  • Customizing feature flags that control UI behavior in different environments

Theoretical Basis

The runtime injection pattern can be generalized as an adapter layer between the container's external configuration interface (environment variables from Docker Compose) and the application's internal configuration interface (NEXT_PUBLIC_* variables with URL path suffixes).

Runtime Environment Injection Flow:

Docker Compose .env          Entrypoint Script              Next.js Application
-------------------          -----------------              -------------------
CONSOLE_API_URL=https://..   NEXT_PUBLIC_API_PREFIX=         process.env.NEXT_PUBLIC_API_PREFIX
                             ${CONSOLE_API_URL}/console/api  = "https://.../console/api"

APP_API_URL=https://..       NEXT_PUBLIC_PUBLIC_API_PREFIX=  process.env.NEXT_PUBLIC_PUBLIC_API_PREFIX
                             ${APP_API_URL}/api              = "https://.../api"

MARKETPLACE_API_URL=https:// NEXT_PUBLIC_MARKETPLACE_API_    process.env.NEXT_PUBLIC_MARKETPLACE_API_PREFIX
                             PREFIX=${..}/api/v1             = "https://.../api/v1"

SENTRY_DSN=https://..        NEXT_PUBLIC_SENTRY_DSN=         process.env.NEXT_PUBLIC_SENTRY_DSN
                             ${SENTRY_DSN}                   = "https://..."

DEPLOY_ENV=PRODUCTION        NEXT_PUBLIC_DEPLOY_ENV=         process.env.NEXT_PUBLIC_DEPLOY_ENV
                             ${DEPLOY_ENV}                   = "PRODUCTION"

URL suffix convention:

The entrypoint script appends specific API path prefixes to base URLs, following a convention where the base URL is the domain and the suffix identifies the API subsystem:

Base Variable        Suffix        Result Variable                     Purpose
-------------        ------        ---------------                     -------
CONSOLE_API_URL      /console/api  NEXT_PUBLIC_API_PREFIX              Console backend API
APP_API_URL          /api          NEXT_PUBLIC_PUBLIC_API_PREFIX       Public-facing application API
MARKETPLACE_API_URL  /api/v1       NEXT_PUBLIC_MARKETPLACE_API_PREFIX  Plugin marketplace API

Process model pseudocode:

# Container entrypoint lifecycle
entrypoint() {
    # Phase 1: Environment mapping
    map_external_vars_to_next_public()

    # Phase 2: Set defaults for optional flags
    apply_defaults()

    # Phase 3: Launch PM2 cluster
    pm2 start server.js \
        --name dify-web \
        --exec-mode cluster \
        --instances $PM2_INSTANCES \
        --no-daemon  # Keep container running (PID 1)
}

The --no-daemon flag is critical in Docker: it keeps PM2 as the foreground process (PID 1), which means Docker can monitor the process health and the container stays alive as long as PM2 is running.
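In the image itself, the entrypoint script is wired in through the Dockerfile. A minimal sketch, assuming the script lives at docker/entrypoint.sh (the base image, paths, and install step are illustrative, not the project's actual Dockerfile):

```dockerfile
# Illustrative fragment; the real Dockerfile differs.
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm install -g pm2
COPY docker/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

Using the exec form of ENTRYPOINT (the JSON-array syntax) avoids an extra shell wrapper, so signals Docker sends on stop reach the entrypoint process directly.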

Related Pages

Implemented By
