Environment: Langfuse S3-Compatible Storage
| Knowledge Sources | |
|---|---|
| Domains | Infrastructure, Storage |
| Last Updated | 2026-02-14 06:00 GMT |
Overview
S3-compatible blob storage (MinIO, AWS S3, Azure Blob Storage, or Google Cloud Storage) for ingestion event files, media uploads, batch exports, and core data exports with configurable multipart upload and rate-limit-aware routing.
Description
Langfuse uses blob storage for four distinct purposes, each with independent configuration: (1) Event Upload stores raw ingestion event payloads for durability before queue processing, (2) Media Upload stores user-uploaded media attachments (images, PDFs, audio), (3) Batch Export stores exported data files for download, and (4) Core Data Export stores scheduled data backups. The StorageService abstraction supports AWS S3, MinIO (development), Azure Blob Storage, and Google Cloud Storage with a unified API.
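To illustrate what a unified API over multiple backends can look like, here is a minimal sketch with an in-memory backend suitable for unit tests. The interface and class names are hypothetical and are not Langfuse's actual `StorageService` API.

```typescript
// Hypothetical unified blob-storage interface in the spirit of the
// StorageService abstraction described above (names are illustrative).
interface BlobStorage {
  upload(key: string, data: Buffer, contentType: string): Promise<void>;
  download(key: string): Promise<Buffer>;
  signedDownloadUrl(key: string, expirySeconds: number): Promise<string>;
}

// Minimal in-memory backend, useful for testing code against the interface.
class InMemoryStorage implements BlobStorage {
  private objects = new Map<string, Buffer>();

  async upload(key: string, data: Buffer, _contentType: string): Promise<void> {
    this.objects.set(key, data);
  }

  async download(key: string): Promise<Buffer> {
    const data = this.objects.get(key);
    if (data === undefined) throw new Error("NoSuchKey");
    return data;
  }

  async signedDownloadUrl(key: string, expirySeconds: number): Promise<string> {
    // A real backend would return a presigned URL; here we fake one.
    return `memory://${key}?expires=${expirySeconds}`;
  }
}

async function demo() {
  const storage: BlobStorage = new InMemoryStorage();
  await storage.upload("events/evt-1.json", Buffer.from("{}"), "application/json");
  const body = await storage.download("events/evt-1.json");
  console.log(body.toString());
}
demo();
```

Coding against one interface is what lets the same call sites target MinIO in development and S3, Azure Blob, or GCS in production.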
Usage
Use this environment for all Langfuse deployments. At minimum, the Event Upload bucket is required for ingestion. Development uses MinIO as a local S3-compatible service.
System Requirements
| Category | Requirement | Notes |
|---|---|---|
| Storage | S3-compatible service | MinIO for dev; AWS S3/Azure Blob/GCS for production |
| Disk | Scales with data volume | Event files + media + exports |
| Network | TCP port 9000 (API), 9001 (Console) | MinIO default ports |
Dependencies
System Packages
- MinIO (development): `cgr.dev/chainguard/minio` Docker image
- Azurite (Azure dev): `mcr.microsoft.com/azure-storage/azurite` Docker image
Node.js Packages
- `@aws-sdk/client-s3` = 3.675.0
- `@aws-sdk/lib-storage` = 3.675.0
- `@aws-sdk/s3-request-presigner` = 3.679.0
- `@azure/storage-blob` = 12.26.0 (Azure support)
- `@google-cloud/storage` = 7.18.0 (GCS support)
Credentials
Event Upload (Required)
- `LANGFUSE_S3_EVENT_UPLOAD_BUCKET`: (Required) S3 bucket name
- `LANGFUSE_S3_EVENT_UPLOAD_REGION`: AWS region
- `LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT`: S3 endpoint URL (for MinIO/custom)
- `LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID`: AWS access key
- `LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY`: AWS secret key
- `LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE`: Force path-style URLs (default: `false`; set `true` for MinIO)
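As a config fragment, a development setup against local MinIO might look like the following. The credential and bucket values match the development defaults listed under Quick Install; the region is a placeholder, since MinIO ignores it.

```shell
# Example event-upload configuration for local MinIO (development defaults).
LANGFUSE_S3_EVENT_UPLOAD_BUCKET=langfuse
LANGFUSE_S3_EVENT_UPLOAD_REGION=us-east-1          # placeholder; ignored by MinIO
LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT=http://localhost:9000
LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID=minio
LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY=miniosecret
LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE=true     # required for MinIO
```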
Media Upload (Optional)
- `LANGFUSE_S3_MEDIA_UPLOAD_BUCKET`: S3 bucket name
- `LANGFUSE_S3_MEDIA_MAX_CONTENT_LENGTH`: Max upload size in bytes (default: 1,000,000,000, i.e. ~1 GB)
- `LANGFUSE_S3_MEDIA_DOWNLOAD_URL_EXPIRY_SECONDS`: Download URL expiry in seconds (default: 3600)
Batch Export (Optional)
- `LANGFUSE_S3_BATCH_EXPORT_ENABLED`: Enable batch exports (default: `false`)
- `LANGFUSE_S3_BATCH_EXPORT_BUCKET`: S3 bucket name
Performance Tuning
- `LANGFUSE_S3_CONCURRENT_WRITES`: Max concurrent S3 writes (default: 50)
- `LANGFUSE_S3_CONCURRENT_READS`: Max concurrent S3 reads (default: 50)
- `LANGFUSE_S3_LIST_MAX_KEYS`: Max keys per S3 list request (default: 200)
- `LANGFUSE_S3_RATE_ERROR_SLOWDOWN_ENABLED`: Enable S3 rate-limit detection (default: `false`)
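A concurrency cap like `LANGFUSE_S3_CONCURRENT_WRITES` can be enforced with a promise-based semaphore, as in the sketch below. This is illustrative only, not Langfuse's actual implementation, and `uploadToS3` in the usage comment is a hypothetical helper.

```typescript
// Promise-based semaphore: at most `limit` tasks run concurrently.
class Semaphore {
  private waiters: Array<() => void> = [];
  private active = 0;

  constructor(private readonly limit: number) {}

  async run<T>(task: () => Promise<T>): Promise<T> {
    // Re-check after each wakeup so the limit holds even under races.
    while (this.active >= this.limit) {
      await new Promise<void>((resolve) => this.waiters.push(resolve));
    }
    this.active++;
    try {
      return await task();
    } finally {
      this.active--;
      this.waiters.shift()?.(); // wake one waiting task, if any
    }
  }
}

// Cap concurrent S3 writes at the configured limit (default 50).
const writeLimiter = new Semaphore(
  Number(process.env.LANGFUSE_S3_CONCURRENT_WRITES ?? 50),
);
// Usage: await writeLimiter.run(() => uploadToS3(bucket, key, body));
```

Wrapping every write in the limiter keeps burst traffic from opening an unbounded number of simultaneous connections to the storage backend.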
Cloud Provider Alternatives
- `LANGFUSE_USE_AZURE_BLOB`: Use Azure Blob Storage (default: `false`)
- `LANGFUSE_USE_GOOGLE_CLOUD_STORAGE`: Use Google Cloud Storage (default: `false`)
Quick Install
```bash
# Start MinIO via Docker Compose (development)
pnpm run infra:dev:up

# MinIO Console: http://localhost:9001
# Default credentials: minio / miniosecret
# Default bucket: langfuse
```
Code Evidence
S3 multipart upload configuration from `packages/shared/src/server/services/StorageService.ts`:

```typescript
// Default: 5 MB part size supports files up to ~50 GB (5 MB x 10,000 parts)
// For large files, use partSize: 100 * 1024 * 1024 (100 MB) to support up to ~1 TB
const upload = new Upload({
  client: s3Client,
  params: { Bucket: bucket, Key: fileName, Body: data, ContentType: fileType },
  partSize: partSize,
  queueSize: queueSize,
});
```
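The arithmetic in the comment follows from S3's multipart constraints: a minimum part size of 5 MB and a maximum of 10,000 parts per upload. A small helper (illustrative, not part of the Langfuse codebase) can derive the smallest valid part size for a target file size:

```typescript
const MIN_PART_SIZE = 5 * 1024 * 1024; // 5 MB S3 minimum part size
const MAX_PARTS = 10_000;              // S3 per-upload part limit

// Smallest part size that keeps `fileSizeBytes` within MAX_PARTS parts,
// never going below the 5 MB minimum.
function partSizeFor(fileSizeBytes: number): number {
  const needed = Math.ceil(fileSizeBytes / MAX_PARTS);
  return Math.max(MIN_PART_SIZE, needed);
}

console.log(partSizeFor(1 * 1024 * 1024 * 1024));   // 5242880 (5 MiB minimum applies)
console.log(partSizeFor(500 * 1024 * 1024 * 1024)); // 53687092 (~51 MiB parts)
```

Files up to 5 MB x 10,000 = ~50 GB fit under the default; beyond that, the part size must grow, which is why the comment suggests 100 MB parts for files approaching 1 TB.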
Azure Blob upload from `packages/shared/src/server/services/StorageService.ts`:

```typescript
const bufferSize = partSize ?? 8 * 1024 * 1024; // Default 8 MB per block
const maxConcurrency = 5;
await blockBlobClient.uploadStream(data, bufferSize, maxConcurrency, { ... });
```
Common Errors
| Error Message | Cause | Solution |
|---|---|---|
NoSuchBucket |
Bucket does not exist | Create the bucket via MinIO Console or AWS CLI |
SlowDown |
S3 rate limiting | Enable LANGFUSE_S3_RATE_ERROR_SLOWDOWN_ENABLED for adaptive routing
|
AccessDenied |
Invalid credentials | Check access key and secret key configuration |
ECONNREFUSED on port 9000 |
MinIO not running | Run pnpm run infra:dev:up
|
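For transient `SlowDown` errors, a caller can also retry with exponential backoff, as in this sketch. It is illustrative only; Langfuse's built-in handling (see Compatibility Notes) additionally flags affected projects in Redis and reroutes them.

```typescript
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Retry `op` when it fails with an error named "SlowDown", doubling the
// delay after each attempt; rethrow any other error immediately.
async function withSlowDownRetry<T>(
  op: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 100,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      const name = (err as { name?: string }).name;
      if (name !== "SlowDown" || attempt + 1 >= maxAttempts) throw err;
      await sleep(baseDelayMs * 2 ** attempt); // 100 ms, 200 ms, 400 ms, ...
    }
  }
}
```

Backoff smooths over brief throttling windows; persistent `SlowDown` responses are better handled by reducing concurrency or enabling the adaptive routing flag.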
Compatibility Notes
- MinIO: Used for development. Requires `FORCE_PATH_STYLE=true`. Default bucket: `langfuse`.
- AWS S3: Production default. Supports SSE with AES-256 or KMS.
- Azure Blob: Enable with `LANGFUSE_USE_AZURE_BLOB=true`. Uses an 8 MB block size and 5 concurrent uploads.
- Google Cloud Storage: Enable with `LANGFUSE_USE_GOOGLE_CLOUD_STORAGE=true`. Requires `LANGFUSE_GOOGLE_CLOUD_STORAGE_CREDENTIALS`.
- Rate Limiting: When S3 returns `SlowDown` errors, affected projects are flagged in Redis with a configurable TTL and routed to a secondary queue.
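The flag-and-reroute mechanism in the last note can be sketched as follows, with an in-memory `Map` standing in for Redis. The key handling, TTL value, and queue names here are illustrative, not Langfuse's actual implementation.

```typescript
const SLOWDOWN_TTL_MS = 60_000; // stand-in for the configurable TTL
const flaggedUntil = new Map<string, number>();

// Record a SlowDown flag for a project; it expires after the TTL.
function flagProject(projectId: string): void {
  flaggedUntil.set(projectId, Date.now() + SLOWDOWN_TTL_MS);
}

function isFlagged(projectId: string): boolean {
  const until = flaggedUntil.get(projectId);
  if (until === undefined) return false;
  if (Date.now() > until) {
    flaggedUntil.delete(projectId); // flag expired, clean up
    return false;
  }
  return true;
}

// Flagged projects are routed to the secondary queue until the flag expires.
function queueFor(projectId: string): "primary" | "secondary" {
  return isFlagged(projectId) ? "secondary" : "primary";
}
```

Keeping the flag in Redis (rather than in process memory, as here) lets every worker observe the same throttling state, and the TTL means routing recovers automatically once the flag expires.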