Principle:DevExpress Testcafe Execution Parameter Tuning

From Leeroopedia
Knowledge Sources
Domains Testing, CI_CD, Web_Automation
Last Updated 2026-02-12 04:00 GMT

Overview

Execution Parameter Tuning is the optimization of test execution behavior through configurable timeouts, concurrency, retry strategies, and speed controls for different environments.

Description

Test execution behavior must adapt to different environments: local development may prioritize visibility and debugging, while CI prioritizes speed and reliability. Execution Parameter Tuning provides a comprehensive configuration system to control every aspect of test execution, from network timeouts to parallelization strategies.

Key configuration dimensions include:

  • Concurrency: Number of parallel browser instances to optimize execution time
  • Timeouts: Fine-grained control over selector, assertion, page load, and overall test timeouts
  • Retry Logic: Quarantine mode for handling flaky tests with configurable attempt limits
  • Fail-Fast: Stop entire test run on first failure to save CI time
  • Speed Control: Slow down test execution for debugging or visual verification
  • Application Lifecycle: Start and stop tested application automatically
  • Browser Behavior: Control page caching, reload behavior, and JavaScript error handling
  • Configuration Sources: Merge settings from config files, CLI flags, and programmatic API

Usage

Use Execution Parameter Tuning when:

  • Optimizing CI pipeline execution time through parallelization
  • Handling flaky tests with retry strategies (quarantine mode)
  • Debugging tests by slowing execution speed
  • Running tests in different environments (local, CI, staging, production)
  • Managing long-running tests with appropriate timeout values
  • Starting tested application as part of test lifecycle
  • Implementing fail-fast behavior for rapid feedback

Theoretical Basis

Core Concept: Test execution behavior is a function of environment constraints and testing goals. Optimal parameters balance speed, reliability, and resource utilization through hierarchical configuration merging.

Configuration Hierarchy:

// Configuration sources (in order of precedence, highest to lowest)
CONFIGURATION_SOURCES = [
    CLI_FLAGS,           // Highest priority
    PROGRAMMATIC_API,    // runner.run(options)
    CONFIG_FILE,         // .testcaferc.js/.json
    DEFAULTS             // Built-in defaults
]

FUNCTION resolveConfiguration(sources):
    config = DEFAULTS

    FOR EACH source IN [CONFIG_FILE, PROGRAMMATIC_API, CLI_FLAGS]:
        IF source.exists():
            config = MERGE(config, source.values)
            TRACK_OVERRIDES(source.values)

    RETURN config
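
The merge above can be sketched in plain JavaScript. This is not TestCafe's internal code — the names and defaults are illustrative — but it shows the same last-source-wins semantics:

```javascript
// Built-in defaults sit at the bottom of the precedence order.
const DEFAULTS = { concurrency: 1, speed: 1.0, selectorTimeout: 10000 };

// Object spread merges left to right, so later (higher-priority) sources
// override earlier ones: defaults < config file < programmatic API < CLI.
function resolveConfiguration(configFile = {}, programmatic = {}, cliFlags = {}) {
  return { ...DEFAULTS, ...configFile, ...programmatic, ...cliFlags };
}

const resolved = resolveConfiguration(
  { concurrency: 4 },   // .testcaferc.json
  { speed: 0.5 },       // runner.run(options)
  { concurrency: 8 }    // --concurrency 8
);
// resolved: { concurrency: 8, speed: 0.5, selectorTimeout: 10000 }
```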

Concurrency Model:

// Parallel execution calculation
FUNCTION calculateExecutionTime(tests, concurrency, avgTestTime):
    IF concurrency == 1:
        RETURN tests.length * avgTestTime  // Serial execution

    // Parallel execution with N browser instances
    batches = CEILING(tests.length / concurrency)
    RETURN batches * avgTestTime

// Example: 100 tests, 5 second avg, concurrency=1 vs 10
serial_time = 100 * 5 = 500 seconds (8.3 minutes)
parallel_time = CEILING(100/10) * 5 = 50 seconds

// But consider overhead:
actual_parallel_time = parallel_time + (browser_startup * concurrency)
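
The batch arithmetic above is directly runnable; `browserStartup` below is an assumed per-instance startup cost added to model the overhead term:

```javascript
// Estimate wall-clock time for a run: tests split into batches of size
// `concurrency`, each batch taking roughly one average test time, plus
// an optional startup cost per browser instance.
function calculateExecutionTime(testCount, concurrency, avgTestTime, browserStartup = 0) {
  const batches = Math.ceil(testCount / concurrency);
  return batches * avgTestTime + browserStartup * concurrency;
}

calculateExecutionTime(100, 1, 5);      // → 500 (serial)
calculateExecutionTime(100, 10, 5);     // → 50  (ideal parallel)
calculateExecutionTime(100, 10, 5, 3);  // → 80  (with 3s startup per instance)
```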

Timeout Strategy:

// Hierarchical timeout application
TIMEOUTS = {
    selector: 10000,      // Wait for DOM elements
    assertion: 3000,      // Retry assertions
    pageLoad: 3000,       // Wait for page navigation
    pageRequest: 60000,   // Network request timeout
    testExecution: null,  // Per-test timeout (null = unlimited)
    runExecution: null    // Entire run timeout (null = unlimited)
}

FUNCTION waitForSelector(selector, customTimeout = null):
    timeout = customTimeout || TIMEOUTS.selector
    startTime = NOW()

    WHILE NOW() - startTime < timeout:
        element = QUERY_DOM(selector)
        IF element.exists():
            RETURN element

        WAIT(50)  // Poll interval

    THROW TimeoutError("Selector not found: " + selector)

FUNCTION executeAssertion(condition, customTimeout = null):
    timeout = customTimeout || TIMEOUTS.assertion
    startTime = NOW()

    WHILE NOW() - startTime < timeout:
        IF condition():
            RETURN true

        WAIT(50)  // Retry interval

    THROW AssertionError("Condition not met within timeout")
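
Both pseudocode loops above share the same poll-until-timeout shape. A minimal async JavaScript sketch of that shared pattern (not TestCafe's implementation) could look like:

```javascript
// Poll `condition` every `interval` ms until it returns true or `timeout`
// ms elapse; mirrors the retry behavior of selector and assertion waits.
async function waitFor(condition, timeout = 3000, interval = 50) {
  const start = Date.now();
  while (Date.now() - start < timeout) {
    if (condition()) return true;
    await new Promise(resolve => setTimeout(resolve, interval));
  }
  throw new Error(`Condition not met within ${timeout}ms`);
}

// Usage: the wait resolves once the flag flips, well inside the timeout.
let ready = false;
setTimeout(() => { ready = true; }, 100);
```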

Quarantine Mode (Flaky Test Handling):

// Automatic retry for unstable tests
CONFIG QuarantineMode {
    enabled: boolean,
    attemptLimit: number,       // Max retry attempts (default: 5)
    successThreshold: number    // Required consecutive successes (default: 2)
}

FUNCTION runTestWithQuarantine(test, config):
    IF NOT config.quarantineMode.enabled:
        RETURN runTest(test)  // Normal execution

    attempts = 0
    consecutiveSuccesses = 0
    failures = []

    WHILE attempts < config.quarantineMode.attemptLimit:
        attempts++
        result = runTest(test)

        IF result.passed:
            consecutiveSuccesses++
            IF consecutiveSuccesses >= config.quarantineMode.successThreshold:
                RETURN { passed: true, quarantined: true, attempts }
        ELSE:
            consecutiveSuccesses = 0
            failures.append(result.error)

    // Failed quarantine
    RETURN { passed: false, quarantined: true, attempts, failures }
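
The quarantine loop is deterministic enough to run as-is. The sketch below (illustrative names, not TestCafe internals) retries a simulated flaky test that fails once and then passes:

```javascript
// Retry a test until it passes `successThreshold` times in a row or the
// attempt budget runs out.
function runTestWithQuarantine(runTest, { attemptLimit = 5, successThreshold = 2 } = {}) {
  let attempts = 0;
  let consecutiveSuccesses = 0;
  const failures = [];

  while (attempts < attemptLimit) {
    attempts++;
    const result = runTest();
    if (result.passed) {
      if (++consecutiveSuccesses >= successThreshold) {
        return { passed: true, quarantined: true, attempts };
      }
    } else {
      consecutiveSuccesses = 0;  // a failure resets the success streak
      failures.push(result.error);
    }
  }
  return { passed: false, quarantined: true, attempts, failures };
}

// A flaky test that fails on the first attempt, then passes.
let calls = 0;
const flaky = () => (++calls === 1 ? { passed: false, error: "boom" } : { passed: true });

const outcome = runTestWithQuarantine(flaky);
// outcome: { passed: true, quarantined: true, attempts: 3 }
```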

Speed Control:

// Speed factor affects action delays
SPEED_FACTOR = 1.0  // Range: 0.01 (very slow) to 1.0 (normal)

FUNCTION performAction(action, speed = SPEED_FACTOR):
    // Execute action
    action.execute()

    // Add delay based on action type and speed factor
    baseDelay = ACTION_DELAYS[action.type]  // e.g., 100ms for click
    actualDelay = baseDelay * (1 / speed)

    WAIT(actualDelay)

// Examples (actualDelay = baseDelay / speed):
speed = 1.0  -> baseline delay (normal speed)
speed = 0.5  -> 2x slower (delays doubled)
speed = 0.1  -> 10x slower (good for debugging)
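
The delay scaling reduces to one line of arithmetic (`baseDelay` is the assumed per-action pause from the pseudocode above):

```javascript
// Delay applied after each action: the base per-action pause divided by
// the speed factor, so lower speed values stretch the delays out.
function actionDelay(baseDelay, speed) {
  return baseDelay / speed;
}

actionDelay(100, 1.0);  // → 100 (baseline delay)
actionDelay(100, 0.5);  // → 200 (2x slower)
actionDelay(100, 0.1);  // → 1000 (10x slower)
```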

Fail-Fast Implementation:

CONFIG stopOnFirstFail = true

FUNCTION runTestSuite(tests, config):
    results = []

    FOR EACH test IN tests:
        result = runTest(test)
        results.append(result)

        IF config.stopOnFirstFail AND NOT result.passed:
            LOG("Stopping test run due to failure")
            CANCEL_REMAINING_TESTS()
            BREAK

    RETURN results
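
The loop above can be exercised with stub tests (hypothetical names); breaking out of the loop stands in for cancelling the remaining tests:

```javascript
// Run tests in order; with stopOnFirstFail set, abandon the suite as soon
// as one test fails, so later tests never execute.
function runTestSuite(tests, { stopOnFirstFail = false } = {}) {
  const results = [];
  for (const test of tests) {
    const result = test();
    results.push(result);
    if (stopOnFirstFail && !result.passed) break;  // remaining tests are skipped
  }
  return results;
}

const suite = [
  () => ({ passed: true }),
  () => ({ passed: false }),
  () => ({ passed: true })   // never runs under fail-fast
];

runTestSuite(suite, { stopOnFirstFail: true }).length;  // → 2
runTestSuite(suite).length;                             // → 3
```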

Related Pages

Implemented By

Uses Heuristic
