Principle: DuckDB Build Verification
Overview
Validating build correctness through automated test execution is a critical quality gate in the DuckDB development workflow. This principle ensures that every build produces a functionally correct binary before the corresponding change is deployed, released, or merged into the main branch.
Description
Build verification is the practice of systematically exercising compiled artifacts to confirm they behave as expected. In the DuckDB project, this takes several forms:
Types of Verification
| Verification Type | Scope | When Used |
|---|---|---|
| Unit tests | Individual functions, operators, and components | Every commit, every pull request |
| Integration tests | End-to-end SQL query execution and result correctness | Every commit, every pull request |
| Release tests | Full test suite under release build configuration with optimizations enabled | Before tagged releases |
| SQL logic tests | Large corpus of SQL statements validated against expected output | Continuous integration |
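The schedule in the table can be sketched as a simple trigger-to-suite mapping. This is a minimal illustration; the trigger and suite names below are hypothetical, not DuckDB's actual CI job names.

```python
# Illustrative mapping of verification types to the CI events that trigger them.
# Suite and trigger names are hypothetical, not DuckDB's real job names.
SCHEDULE = {
    "unit": {"commit", "pull_request"},
    "integration": {"commit", "pull_request"},
    "release": {"tagged_release"},
    "sql_logic": {"commit", "pull_request"},
}

def suites_for(trigger: str) -> list[str]:
    """Return the verification suites that should run for a given CI trigger."""
    return sorted(name for name, triggers in SCHEDULE.items() if trigger in triggers)

print(suites_for("pull_request"))   # unit, integration, and SQL logic tests
print(suites_for("tagged_release")) # release tests only
```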
Why Build Verification Matters
Without automated verification, subtle regressions can be introduced by:
- Compiler optimization changes between debug and release builds
- Platform-specific behavior differences (Linux, macOS, Windows)
- Interaction effects between modules that are individually correct but produce incorrect results in combination
- Unity build grouping changes that alter compilation unit boundaries
Build verification catches these issues before they reach users.
Verification Coverage
DuckDB's test suite covers:
- Parser correctness -- SQL statements are parsed into the expected AST structures
- Planner correctness -- logical plans are generated correctly for various query patterns
- Optimizer correctness -- optimization passes preserve query semantics
- Execution correctness -- physical operators produce correct results for all data types
- Storage correctness -- data persists and is retrieved accurately across sessions
- Transaction correctness -- MVCC isolation guarantees hold under concurrent access
Usage
This principle applies after the build has completed and executables have been produced:
- Build the core library and test runner executable.
- Execute the test suite against the built binary.
- Analyze test results: all tests must pass for the build to be considered verified.
- Optionally run with different build configurations (debug, release, sanitizers) for broader coverage.
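The analysis step above reduces to a strict all-pass gate. A minimal sketch, assuming the per-test results have already been collected from the test runner (in practice they would come from executing the built test binary):

```python
def verify_build(test_results: dict[str, bool]) -> bool:
    """A build is verified only if every test in the suite passed.

    The results would normally be produced by running the project's
    test runner against the built binary; here they are passed in
    directly to keep the sketch self-contained.
    """
    return bool(test_results) and all(test_results.values())

# Step 3 of the workflow: analyze results; all tests must pass.
results = {"test_parser": True, "test_storage": True, "test_transactions": False}
print(verify_build(results))  # False: one failing test blocks verification
```

An empty result set is also treated as unverified, since a run that executed no tests demonstrates nothing.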
In continuous integration, this principle is enforced automatically -- every pull request must pass the full test suite before merging.
Theoretical Basis
Test-Driven Verification
Build verification follows the principle that software correctness must be demonstrated, not assumed. By running a comprehensive suite of automated tests after every build, the project maintains high confidence that:
- New code does not break existing functionality (regression detection).
- The build configuration itself is correct (compiler flags, link order, dependency versions).
- Platform-specific behavior is validated on each target platform.
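Regression detection, the first point above, can be modeled as a comparison between a baseline test run and the current run. This is a simplified sketch, not DuckDB's actual reporting; the test names are illustrative.

```python
def regressions(baseline: dict[str, bool], current: dict[str, bool]) -> list[str]:
    """Tests that passed in the baseline run but fail (or are missing) now."""
    return sorted(
        name for name, passed in baseline.items()
        if passed and not current.get(name, False)
    )

# Hypothetical runs: test_sort passed before the change but fails after it.
before = {"test_join": True, "test_sort": True, "test_window": False}
after = {"test_join": True, "test_sort": False, "test_window": False}
print(regressions(before, after))  # ['test_sort']
```

Note that `test_window` is not reported: it was already failing in the baseline, so it is a pre-existing failure rather than a regression introduced by the new code.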
Continuous Integration Validation
In a CI/CD pipeline, build verification serves as the quality gate between code change and code deployment:
- A developer pushes a commit or opens a pull request.
- The CI system builds the project in one or more configurations.
- The test suite is executed against each build.
- Only if all tests pass in all configurations is the change considered safe to merge.
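The gate described by these steps is a conjunction over the whole build matrix. A sketch under the assumption that each configuration reports a per-test pass/fail map (configuration and test names are illustrative):

```python
def safe_to_merge(matrix: dict[str, dict[str, bool]]) -> bool:
    """Merge is allowed only if every test passes in every configuration."""
    return bool(matrix) and all(
        results and all(results.values())
        for results in matrix.values()
    )

# Hypothetical matrix: a bug that only appears under release optimizations
# fails in the release configuration and therefore blocks the merge.
matrix = {
    "debug": {"test_parser": True, "test_exec": True},
    "release": {"test_parser": True, "test_exec": False},
}
print(safe_to_merge(matrix))  # False: the release configuration has a failure
```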
This ensures the main branch always contains verified, buildable, and functionally correct code.
Defense in Depth
Running tests at multiple levels (unit, integration, end-to-end) and in multiple configurations (debug, release, sanitized) provides defense in depth against different classes of bugs:
- Debug builds with assertions catch logic errors and invariant violations.
- Release builds catch bugs that only manifest with compiler optimizations enabled, including latent undefined behavior.
- Sanitizer builds (ASan, UBSan, TSan) catch memory errors, undefined behavior, and data races.
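The complementary coverage can be made concrete with a simplified, illustrative map from build configuration to the bug classes it tends to expose; the point is that only the union across configurations covers everything.

```python
# Illustrative, not exhaustive: which bug classes each configuration exposes.
COVERAGE = {
    "debug": {"logic errors", "invariant violations"},
    "release": {"optimization bugs"},
    "asan": {"memory errors"},
    "ubsan": {"undefined behavior"},
    "tsan": {"data races"},
}

def combined_coverage(configs: list[str]) -> set[str]:
    """Union of bug classes exposed by the selected build configurations."""
    classes: set[str] = set()
    for cfg in configs:
        classes |= COVERAGE.get(cfg, set())
    return classes

# Running only debug and release builds misses data races;
# the full matrix covers every class in this model.
print("data races" in combined_coverage(["debug", "release"]))  # False
print("data races" in combined_coverage(list(COVERAGE)))        # True
```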