
Principle:DevExpress Testcafe Programmatic Test Run Execution

From Leeroopedia
Knowledge Sources
Domains Testing, Web_Automation
Last Updated 2026-02-12 04:00 GMT

Overview

Programmatic Test Run Execution is the concept of orchestrating the complete test execution lifecycle, from compiling tests and launching browsers to distributing tests across browser instances and collecting results.

Description

Test execution is a complex multi-phase process that coordinates numerous asynchronous operations: compiling test files, resolving browser connections, launching browser instances, distributing tests across browsers, executing test code in browser contexts, collecting results, generating reports, and cleaning up resources.

Programmatic Test Run Execution encapsulates this complexity behind a single asynchronous method that accepts runtime options and returns a promise resolving to the test results. This orchestration involves multiple subsystems working in concert: the Bootstrapper compiles tests and resolves browsers, the Task creates browser jobs and distributes tests, the BrowserJob manages test execution within a single browser instance, and the TestRunController handles individual test lifecycle events.

The execution flow is event-driven, using a message bus to coordinate between components. Tests begin when browsers connect, progress through setup/test/teardown phases, emit events for each action, and complete when all tests finish or an error occurs. The system handles edge cases like browser disconnections, test timeouts, quarantine mode for flaky tests, and stop-on-first-fail early termination.

Usage

Use Programmatic Test Run Execution after configuring a runner with test sources, browsers, and reporters. Call the run method with runtime options that can override or supplement configuration options. The method returns a promise that resolves with the count of failed tests, enabling programmatic handling of test results.

This approach is essential when integrating TestCafe into CI/CD pipelines where test results must determine build success or failure. The programmatic API allows custom logic before and after test execution, conditional test runs based on environment state, and integration with other testing or reporting tools.

Avoid using the programmatic API when the CLI provides sufficient flexibility. The CLI is simpler for standard use cases and handles many edge cases automatically. Use programmatic execution when you need fine-grained control over the execution lifecycle or integration with custom tooling.
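A minimal CI entry point built on TestCafe's documented programmatic API might look like the following sketch. The `createTestCafe` factory is injected as a parameter here so the sketch stays self-contained (in real use it comes from `require('testcafe')`); the host name, ports, and reporter choice are illustrative defaults, not requirements:

```javascript
// Sketch of a CI entry point using TestCafe's programmatic API.
// `createTestCafe` is injected (in real use: require('testcafe')).
async function runCiTests(createTestCafe, sources, browsers) {
  const testcafe = await createTestCafe('localhost', 1337, 1338);
  try {
    const runner = testcafe.createRunner();
    // run() resolves with the number of failed tests
    return await runner
      .src(sources)
      .browsers(browsers)
      .reporter('spec')
      .run({ stopOnFirstFail: true });
  } finally {
    await testcafe.close(); // always release browsers and ports
  }
}
```

In a CI script the resolved failure count would typically be mapped to the process exit code, e.g. `process.exitCode = failedCount ? 1 : 0`, so that build success or failure tracks the test results.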

Theoretical Basis

The Orchestration Pattern underlies Programmatic Test Run Execution, coordinating multiple independent subsystems to accomplish a complex workflow. This pattern differs from choreography (where each component knows how to react to events) by using a central coordinator (the Runner and Task) to explicitly manage the workflow.

Execution Phases

  1. Preparation Phase: Validate options, merge configuration, initialize reporters
  2. Bootstrapping Phase: Compile test files, resolve browser connections, load client scripts
  3. Distribution Phase: Create browser jobs, assign tests to browsers based on concurrency
  4. Execution Phase: Launch browsers, connect via gateway, execute tests, emit events
  5. Collection Phase: Gather results from all browser jobs, aggregate failures, generate reports
  6. Cleanup Phase: Close browsers, dispose connections, flush reporters, release resources

Pseudocode

async function executeTests(runner, options) {
    // Phase 1: Preparation
    await validateOptions(options)
    const reporters = await initializeReporters(runner.config)

    // Phase 2: Bootstrapping
    const tests = await compileTestFiles(runner.config.sources)
    const browsers = await resolveBrowserConnections(runner.config.browsers)

    // Phase 3: Distribution
    const browserJobs = createBrowserJobs(tests, browsers, runner.config.concurrency)

    // Phase 4: Execution
    const messageBus = new MessageBus()
    const task = new Task(browserJobs, messageBus)

    // Connect event handlers before starting the task
    messageBus.on('test-run-start', test => reporters.forEach(r => r.reportTestStart(test)))
    messageBus.on('test-run-done', test => reporters.forEach(r => r.reportTestDone(test)))
    messageBus.on('done', () => reporters.forEach(r => r.reportTaskDone()))

    // Start execution
    await task.start()

    // Wait for completion or error
    await waitForCompletion(task, messageBus)

    // Phase 5: Collection
    const results = collectResults(browserJobs)
    const failureCount = countFailures(results)

    // Phase 6: Cleanup
    await closeBrowsers(browsers)
    await disposeReporters(reporters)

    return failureCount
}
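The waitForCompletion helper left abstract in the pseudocode can be sketched as a promise that settles on the bus's terminal events. The 'error' event name and the optional task.abort call are assumptions for illustration:

```javascript
// Resolves when the message bus signals completion, rejects on error.
// Assumes an 'error' event and an optional task.abort() method.
function waitForCompletion(task, messageBus) {
  return new Promise((resolve, reject) => {
    messageBus.once('done', resolve);
    messageBus.once('error', err => {
      task.abort?.(); // stop remaining jobs before surfacing the error
      reject(err);
    });
  });
}
```

Using `once` rather than `on` ensures the promise settles exactly once even if further events arrive during cleanup.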

Runtime Options

  • speed: Test execution speed multiplier (0.01-1.0)
  • selectorTimeout: Maximum time to wait for selector resolution
  • assertionTimeout: Maximum time to wait for assertion conditions
  • pageLoadTimeout: Maximum time to wait for page loads
  • stopOnFirstFail: Halt execution after first test failure
  • quarantineMode: Retry flaky tests with configurable thresholds
  • debugMode: Pause execution for debugging
  • debugOnFail: Pause automatically when tests fail
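A hypothetical helper (not part of TestCafe's API) showing how these runtime options might be assembled, with the speed multiplier clamped to its documented 0.01-1.0 range before being passed to run. The default values mirror commonly documented TestCafe defaults but should be treated as assumptions here:

```javascript
// Illustrative helper: merge runtime option overrides with assumed
// defaults and clamp `speed` to the documented 0.01-1.0 range.
function buildRunOptions(overrides = {}) {
  const opts = {
    speed: 1,               // execution speed multiplier
    selectorTimeout: 10000, // ms to wait for selector resolution
    assertionTimeout: 3000, // ms to wait for assertion conditions
    pageLoadTimeout: 3000,  // ms to wait for page loads
    stopOnFirstFail: false,
    quarantineMode: false,
    debugMode: false,
    debugOnFail: false,
    ...overrides,
  };
  opts.speed = Math.min(1, Math.max(0.01, opts.speed));
  return opts;
}
```

The resulting object is what would be handed to `runner.run(opts)`, letting runtime overrides supplement the static configuration as described in the Usage section.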
