Implementation: TobikoData SQLMesh Context Test
| Knowledge Sources | |
|---|---|
| Domains | Data_Engineering, Testing |
| Last Updated | 2026-02-07 00:00 GMT |
Overview
Concrete test execution method for running unit tests against SQL models, provided by the Context class.
Description
The Context.test method discovers and executes unit tests defined for SQL models in a SQLMesh project. Tests are defined in YAML or CSV files specifying input fixtures and expected outputs. The method runs tests in an isolated environment (typically in-memory DuckDB), compares actual results against expectations, and returns detailed test results including pass/fail status and execution metrics.
Tests execute quickly by using lightweight database engines and small fixture datasets, providing rapid feedback during development. The method supports filtering tests by patterns, controlling output verbosity, and preserving fixture tables for debugging. Test results are automatically logged to the console with color-coded pass/fail indicators.
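As a concrete illustration, a YAML test definition typically pairs input fixture rows for upstream models with the expected output of the model's query, along the lines of the sketch below (the model and column names here are hypothetical; consult the project's own models for real names):

```yaml
test_orders_aggregation:
  model: analytics.orders_summary        # hypothetical model under test
  inputs:
    analytics.orders:                    # upstream model fixture rows
      rows:
        - order_id: 1
          amount: 10.0
        - order_id: 2
          amount: 5.0
  outputs:
    query:                               # expected result of the model's query
      rows:
        - num_orders: 2
          total_amount: 15.0
```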
Usage
Call this method before creating deployment plans to ensure model logic is correct. Use it during development for rapid iteration on model transformations, and integrate it into CI/CD pipelines as a quality gate. Run with increased verbosity when debugging failing tests, and use match_patterns to run specific test subsets during focused development.
Code Reference
Source Location
- Repository: sqlmesh
- File: sqlmesh/core/context.py
- Class: Context
- Method: test (lines 2234-2268)
Signature
```python
def test(
    self,
    match_patterns: t.Optional[t.List[str]] = None,
    tests: t.Optional[t.List[str]] = None,
    verbosity: Verbosity = Verbosity.DEFAULT,
    preserve_fixtures: bool = False,
    stream: t.Optional[t.TextIO] = None,
) -> ModelTextTestResult:
```
Import
```python
from sqlmesh.core.context import Context

context = Context(paths="project")
result = context.test()
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| match_patterns | t.List[str] | No | Glob patterns to filter which tests to run (e.g., ["*orders*"]) |
| tests | t.List[str] | No | Specific test names to execute |
| verbosity | Verbosity | No | Output detail level (DEFAULT, VERBOSE), defaults to DEFAULT |
| preserve_fixtures | bool | No | Keep fixture tables after tests for debugging, defaults to False |
| stream | t.TextIO | No | Output stream for test results, defaults to stdout |
Outputs
| Name | Type | Description |
|---|---|---|
| result | ModelTextTestResult | Test results object containing pass/fail counts, failures, and errors |
Usage Examples
Basic Usage
```python
from sqlmesh.core.context import Context

context = Context(paths="examples/sushi")

# Run all tests
result = context.test()

# Check whether the tests passed
if result.failures or result.errors:
    print(f"Tests failed: {len(result.failures)} failures, {len(result.errors)} errors")
else:
    print(f"All {result.testsRun} tests passed!")
```
Filtered Testing
```python
from sqlmesh.core.context import Context

context = Context(paths="project")

# Run tests matching glob patterns
result = context.test(match_patterns=["*orders*", "*customers*"])

# Run specific tests by name
result = context.test(tests=["test_orders_aggregation", "test_customer_dedup"])
```
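The glob-style semantics of match_patterns can be sketched with the stdlib fnmatch module. This is illustrative of how such pattern filtering behaves, not SQLMesh's internal implementation:

```python
from fnmatch import fnmatch


def select_tests(names, patterns):
    """Return test names matching any of the glob patterns
    (illustrative of match_patterns-style filtering)."""
    return [n for n in names if any(fnmatch(n, p) for p in patterns)]


names = ["test_orders_aggregation", "test_customer_dedup", "test_inventory"]
# "*customers*" does not match "test_customer_dedup" (no plural "customers")
print(select_tests(names, ["*orders*", "*customers*"]))  # ['test_orders_aggregation']
```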
Debugging Tests
```python
from sqlmesh.core.context import Context
from sqlmesh.core.test import Verbosity

context = Context(paths="project")

# Verbose output with fixture preservation
result = context.test(
    match_patterns=["*failing_model*"],
    verbosity=Verbosity.VERBOSE,
    preserve_fixtures=True,
)

# Examine failures: each entry is a (test, traceback) pair
for failure in result.failures:
    print(f"Test: {failure[0]}")
    print(f"Error: {failure[1]}")
```
CI/CD Integration
```python
import sys

from sqlmesh.core.context import Context

context = Context(paths="/workspace/project")

# Run tests with explicit failure handling for CI
result = context.test()
if result.wasSuccessful():
    print("All tests passed")
    sys.exit(0)
else:
    print(f"Test failures: {len(result.failures)}")
    print(f"Test errors: {len(result.errors)}")
    sys.exit(1)
```