Workflow: Langgenius Dify Docker Deployment
| Knowledge Sources | |
|---|---|
| Domains | DevOps, Deployment, Docker |
| Last Updated | 2026-02-12 07:00 GMT |
Overview
End-to-end process for deploying the Dify platform as a self-hosted instance using Docker Compose, from environment configuration through service orchestration to production readiness.
Description
This workflow covers the complete deployment of Dify's multi-service architecture using Docker Compose. The platform consists of an API server (Flask/Gunicorn), a Celery worker for async task processing, a Next.js web frontend, PostgreSQL or MySQL for persistence, Redis for caching and message brokering, a configurable vector database (Weaviate, Milvus, Qdrant, etc.), an Nginx reverse proxy, a plugin daemon, and a sandboxed code execution environment. The deployment uses a template-based Compose configuration with profile-based service selection, enabling operators to choose their preferred vector database backend via a single environment variable.
Usage
Execute this workflow when you need to stand up a self-hosted Dify instance on any machine meeting minimum requirements (2 CPU cores, 4 GiB RAM) with Docker and Docker Compose installed. This is also the workflow to follow when upgrading an existing deployment, as it includes environment variable synchronization that preserves custom configuration while adding newly introduced variables.
Execution Steps
Step 1: Prepare Environment Configuration
Copy the example environment file to create your deployment configuration. The example file contains all available variables with sensible defaults. Customize key settings such as database credentials, secret keys, API URLs, and the choice of vector database backend.
Key considerations:
- The `.env` file is mandatory for deployment
- Set `SECRET_KEY` to a unique random value for security
- Choose `VECTOR_STORE` to select your preferred vector database (weaviate, milvus, qdrant, pgvector, etc.)
- Configure `DB_TYPE` for your preferred relational database (postgresql or mysql)
- Set external URLs (`CONSOLE_API_URL`, `APP_API_URL`, `APP_WEB_URL`) to match your deployment domain
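As a sketch, a minimal customization of the copied environment file might look like the following. All values shown are placeholders, not recommendations; the shipped `.env.example` is the authoritative list of variables:

```shell
# Copy the template first:  cp .env.example .env
# Then edit the copy. Illustrative values only -- generate your own secrets.

SECRET_KEY=replace-with-a-long-random-string   # e.g. from: openssl rand -base64 42
DB_TYPE=postgresql        # relational backend: postgresql or mysql
VECTOR_STORE=weaviate     # vector backend: weaviate, milvus, qdrant, pgvector, ...

# External URLs must match the domain users will reach the deployment on
CONSOLE_API_URL=https://dify.example.com
APP_API_URL=https://dify.example.com
APP_WEB_URL=https://dify.example.com
```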
Step 2: Select and Configure Vector Database
Choose a vector database backend by setting the `VECTOR_STORE` environment variable. Docker Compose profiles automatically load the corresponding service. Configure the vector-database-specific connection variables (endpoint, API key, credentials) as needed.
Available backends:
- Weaviate (default), Milvus, Qdrant, pgvector, OpenSearch, Chroma, Elasticsearch, and others
- Each backend has its own set of connection variables in the environment file
- Profile-based loading means only the selected vector database container starts
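The profile wiring can be pictured as below. The exact expansion is an assumption about how the template maps `VECTOR_STORE` onto Compose profiles; check the shipped `.env.example` for the real line:

```shell
# Selecting the vector backend selects the Compose profile.
VECTOR_STORE=qdrant

# The template derives the active profile from VECTOR_STORE,
# falling back to the default backend when it is unset:
COMPOSE_PROFILES="${VECTOR_STORE:-weaviate}"

echo "$COMPOSE_PROFILES"   # -> qdrant
# Docker Compose then starts only the services tagged with this
# profile, e.g.:  docker compose up -d
```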
Step 3: Initialize Permissions and Volumes
On first launch, an init container sets correct file permissions on shared storage volumes. Docker Compose manages volume creation for persistent data including database files, Redis data, vector database indexes, application file storage, plugin storage, and sandbox dependencies.
Volume structure:
- Database data, Redis data, vector database indexes
- Application file storage for uploads
- Plugin daemon storage
- Sandbox dependencies and configuration
- Optional SSL certificate storage
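What the init container does on first launch can be sketched roughly as follows. The directory names under `./volumes` are illustrative assumptions, and the real container also applies ownership changes (`chown`) that require root:

```shell
# Illustrative sketch of first-launch volume preparation.
# Paths are examples; the actual layout comes from the Compose file.
VOLUME_ROOT=./volumes

for dir in db/data redis/data app/storage plugin_daemon sandbox/dependencies; do
  mkdir -p "$VOLUME_ROOT/$dir"
done

# Grant the service users access to shared storage
# (the real init container typically uses chown as well).
chmod -R 755 "$VOLUME_ROOT"
```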
Step 4: Launch Services with Docker Compose
Start all services using `docker compose up`. Docker Compose orchestrates startup order using health checks and dependency declarations. The API server and Celery workers wait for database and Redis health checks to pass before starting. The Nginx reverse proxy waits for the API and web services.
Startup order:
- Init permissions container runs first
- Database and Redis services start and pass health checks
- API server, Celery worker, and beat scheduler start
- Plugin daemon initializes
- Web frontend starts
- Nginx reverse proxy starts last, after API and web are ready
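After running `docker compose up -d`, it can be useful to poll until the stack answers. A generic helper like this (a hypothetical convenience, not part of Dify) works for any readiness command:

```shell
# wait_for TRIES CMD... : re-run CMD up to TRIES times, sleeping 1s
# between attempts, until it succeeds. Returns non-zero on timeout.
wait_for() {
  tries=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
}

# Example (URL is an assumption -- use your configured APP_WEB_URL):
#   wait_for 30 curl -fsS http://localhost/apps >/dev/null
```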
Step 5: Complete Initial Setup
Access the Dify dashboard at the configured URL and complete the initialization wizard. This creates the first admin account and configures basic workspace settings. The system is now ready for application development.
Post-deployment tasks:
- Create the initial admin account via the setup wizard
- Configure model provider credentials (OpenAI, Anthropic, etc.)
- Optionally enable SSL with certbot for HTTPS
- Optionally enable OpenTelemetry for monitoring
Step 6: Upgrade Existing Deployment
When upgrading to a new version, use the environment synchronization script to merge new variables into the existing configuration without overwriting custom values. The script creates timestamped backups before making changes, adds newly introduced variables, and reports removed variables.
Upgrade procedure:
- Run the env-sync script to update environment variables
- Pull new Docker images
- Restart services with `docker compose up -d`
- Database migrations run automatically on API startup when `MIGRATION_ENABLED=true`
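The synchronization step can be sketched in plain shell. This illustrates the merge behaviour described above (back up, append missing keys, keep custom values); it is not the actual script Dify ships, and it omits the reporting of removed variables:

```shell
# sync_env EXAMPLE TARGET : append variables present in EXAMPLE but
# missing from TARGET, leaving existing custom values untouched.
sync_env() {
  example=$1
  target=$2
  # Timestamped backup before any change
  cp "$target" "$target.bak.$(date +%Y%m%d%H%M%S)"
  while IFS= read -r line; do
    case $line in
      [A-Za-z_]*=*)                     # skip comments and blank lines
        key=${line%%=*}
        # Append only if the key is absent from the target file
        grep -q "^${key}=" "$target" || printf '%s\n' "$line" >> "$target"
        ;;
    esac
  done < "$example"
}
```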