Workflow: DuckDB Extension Development And Distribution
| Knowledge Sources | |
|---|---|
| Domains | Database_Engineering, Extension_Systems, CI_CD |
| Last Updated | 2026-02-07 11:00 GMT |
Overview
End-to-end process for developing, building, signing, and distributing DuckDB extensions that add functionality to the database system.
Description
This workflow covers the complete lifecycle of a DuckDB extension from development through distribution. DuckDB extensions are shared libraries that provide additional functionality (e.g., file format support, database connectors, analytical functions) separate from the core codebase. Extensions can be statically linked into DuckDB binaries or dynamically loaded at runtime. The distribution pipeline includes RSA signing for authenticity verification, gzip compression for efficient transfer, and upload to an S3-based repository organized by version and platform architecture. Extensions support native platforms (Linux, macOS, Windows across x86_64 and ARM64) as well as WebAssembly.
Usage
Execute this workflow when developing a new DuckDB extension, updating an existing extension for a new DuckDB version, or setting up CI/CD for extension distribution. This applies to in-tree extensions (in the main repository), DuckDB-managed out-of-tree extensions, and external third-party extensions.
Execution Steps
Step 1: Configure Extension Build
Register the extension in the DuckDB build system using a CMake configuration file. Use the duckdb_extension_load function to specify the extension name, source location, and linking strategy. Extensions can be loaded from the local extension/ directory, a custom path, or directly from a GitHub repository URL.
Key considerations:
- In-tree extensions live in the extension/ directory
- Out-of-tree extensions can be placed in extension_external/
- The DONT_LINK parameter builds only the loadable binary without static linking
- Extensions from GitHub are downloaded to the cmake build directory
- VCPKG integration handles external library dependencies
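As a sketch, an extension configuration file registering one in-tree and one GitHub-hosted extension might look like the following (the extension names, repository URL, and tag are illustrative; the `duckdb_extension_load` options follow the pattern described above, so check the current DuckDB build documentation for exact parameter names):

```shell
# Write a hypothetical extension_config.cmake (names, URL, and tag are illustrative)
cat > extension_config.cmake <<'EOF'
# In-tree extension from the extension/ directory, statically linked
duckdb_extension_load(json)

# Out-of-tree extension fetched from GitHub; DONT_LINK builds only the
# loadable .duckdb_extension binary without static linking
duckdb_extension_load(demo_scanner
    DONT_LINK
    GIT_URL https://github.com/example/demo_scanner
    GIT_TAG main
)
EOF
```

The build system is then pointed at this file when configuring CMake, which registers both extensions for the build.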
Step 2: Build Extension
Compile the extension using the DuckDB build system. The extension is built against the DuckDB core library headers and produces either a statically-linkable object or a dynamically-loadable .duckdb_extension binary. Multiple extensions can be built simultaneously, with dependency resolution handled by CMake.
Key considerations:
- Extension builds must use DuckDB's top-level CMakeLists.txt as the root CMake file
- The extension template provides a standard project structure
- DUCKDB_EXTENSIONS environment variable accepts a semicolon-separated list
- Individual extensions can be enabled with BUILD_<NAME>=1
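The two ways of enabling extensions can be sketched as follows (the invocations are commented out because they require a full DuckDB checkout; the executable part only demonstrates how the semicolon-separated list expands):

```shell
# Illustrative build invocations (commented: they require a DuckDB checkout):
#   DUCKDB_EXTENSIONS="json;httpfs" make release    # semicolon-separated list
#   BUILD_JSON=1 make release                       # single extension via BUILD_<NAME>=1

# The list form enables one extension per entry:
list="json;httpfs"
for ext in $(printf '%s' "$list" | tr ';' ' '); do
  echo "enabled: $ext"
done
```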
Step 3: Sign Extension Binary
Apply a cryptographic signature to the extension binary for authenticity verification. The signing process computes a SHA-256 hash of the extension binary (split into segments for large files) and creates an RSA signature using a private key. The 256-byte signature is appended to the extension binary.
Key considerations:
- The signing private key is provided via DUCKDB_EXTENSION_SIGNING_PK environment variable
- Without a signing key, 256 zero bytes are appended as a placeholder
- The compute-extension-hash.sh script handles large file hashing via segmented SHA-256
- DuckDB verifies signatures at load time using the corresponding public key
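The signing step can be sketched with openssl. This is a toy, whole-file variant: the real pipeline hashes large files in segments via compute-extension-hash.sh and reads the key from DUCKDB_EXTENSION_SIGNING_PK, and the file name here is a stand-in:

```shell
set -e
printf 'fake extension payload' > demo.duckdb_extension   # stand-in binary
orig=$(wc -c < demo.duckdb_extension)

# Throwaway 2048-bit RSA key; the real key comes from DUCKDB_EXTENSION_SIGNING_PK
openssl genrsa -out signing_key.pem 2048 2>/dev/null

# Sign the SHA-256 digest; an RSA-2048 signature is exactly 256 bytes
openssl dgst -sha256 -sign signing_key.pem -out sig.bin demo.duckdb_extension
cat sig.bin >> demo.duckdb_extension

new=$(wc -c < demo.duckdb_extension)
echo $((new - orig))   # prints 256
```

The fixed 256-byte length is why an unsigned build can substitute 256 zero bytes as a placeholder: the loader always strips the same-sized trailer before verification.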
Step 4: Compress And Upload
Compress the signed extension binary with gzip and upload it to an S3 bucket organized by version and platform architecture. The upload creates entries for both the specific version (immutable) and optionally the latest pointer (mutable). WebAssembly extensions use a variant pipeline with Brotli compression and WASM-specific signature embedding.
Key considerations:
- S3 path structure: s3://bucket/duckdb_version/platform/name.duckdb_extension.gz
- The copy_to_latest flag determines if this becomes the latest available version
- The copy_to_versioned flag creates an immutable versioned entry
- WASM extensions are uploaded without .gz suffix (browser handles decompression)
- Batch upload is available via extension-upload-all.sh
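A single-artifact upload might look like this sketch (the bucket, version, platform, and extension name are illustrative; the aws call is commented out because it needs credentials):

```shell
NAME=demo
DUCKDB_VERSION=v1.0.0        # git tag or commit hash (illustrative)
PLATFORM=linux_amd64         # illustrative platform identifier

printf 'fake extension' > "${NAME}.duckdb_extension"   # stand-in signed binary
gzip -9 -k -f "${NAME}.duckdb_extension"               # -k keeps the original

KEY="s3://duckdb-extensions/${DUCKDB_VERSION}/${PLATFORM}/${NAME}.duckdb_extension.gz"
echo "$KEY"
# aws s3 cp "${NAME}.duckdb_extension.gz" "$KEY"   # requires credentials
```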
Step 5: Test Extension Loading
Verify that the uploaded extension can be installed, loaded, and passes autoloading tests. The test pipeline installs the extension from the repository, loads it into a DuckDB instance, and runs extension-specific validation tests to confirm correct functionality.
Key considerations:
- extension-upload-test.sh validates all built extensions
- Tests verify INSTALL, LOAD, and autoloading behavior
- Extension metadata tests validate version, platform, and signature information
- The run_extension_metadata_tests.sh script generates test data directories
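A minimal load smoke test can be expressed as a SQL script fed to the duckdb CLI. The sketch below only generates the script (the repository URL and extension name are illustrative); it would be run separately with something like `duckdb < smoke_test.sql`:

```shell
# Generate a hypothetical smoke-test script (URL and extension name illustrative)
cat > smoke_test.sql <<'EOF'
-- install from the repository the artifact was uploaded to
SET custom_extension_repository = 'http://nightly-extensions.duckdb.org';
INSTALL demo;
LOAD demo;
-- check metadata: whether the extension installed and loaded
SELECT extension_name, loaded, installed
FROM duckdb_extensions()
WHERE extension_name = 'demo';
EOF
```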
Step 6: Promote To Production
For release builds, promote extension binaries from the nightly S3 bucket to the production S3 bucket. This involves copying extensions across buckets while maintaining the version/platform directory structure. The promotion script handles mapping between nightly and release version identifiers.
Key considerations:
- Nightly bucket: duckdb-extensions-nightly
- Production bucket: duckdb-extensions
- extension-upload-from-nightly.sh handles the promotion
- Version identifiers can be git tags (v0.8.1) or git commit hashes
- Extensions are served via extensions.duckdb.org
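The promotion amounts to a cross-bucket copy with the version identifier rewritten from a commit hash to a release tag. A sketch (hash, tag, platform, and name are illustrative; the aws call is commented out because it needs credentials for both buckets):

```shell
NIGHTLY_ID=0123abcd          # illustrative commit-hash identifier from the nightly build
RELEASE_TAG=v0.8.1           # release git tag
PLATFORM=linux_amd64
NAME=demo

SRC="s3://duckdb-extensions-nightly/${NIGHTLY_ID}/${PLATFORM}/${NAME}.duckdb_extension.gz"
DST="s3://duckdb-extensions/${RELEASE_TAG}/${PLATFORM}/${NAME}.duckdb_extension.gz"
echo "$SRC -> $DST"
# aws s3 cp "$SRC" "$DST"   # requires credentials for both buckets
```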