
Principle:Microsoft Playwright Test Execution and Reporting

From Leeroopedia
Domains Testing, Browser_Automation
Last Updated 2026-02-11 00:00 GMT

Overview

Test execution and reporting is the process of orchestrating test-suite runs with parallelism, sharding, and retry strategies, then generating structured reports that communicate results to both humans and automated systems.

Description

Writing tests is only half of end-to-end testing; the other half is executing them efficiently and communicating results clearly. A test execution engine must solve several challenges simultaneously: running tests fast enough to maintain developer productivity, isolating tests so they do not interfere with each other, handling failures gracefully through retries, and producing output that is useful for debugging failures and tracking quality over time.

Parallelism is the primary mechanism for execution speed. Tests that are independent of each other can run simultaneously across multiple worker processes. The execution engine distributes test files across workers and manages the lifecycle of each worker (startup, test execution, teardown). The degree of parallelism is configurable, allowing teams to balance speed against resource constraints.
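In Playwright, the degree of parallelism is controlled through the `workers` option in `playwright.config.ts`; the values below are illustrative choices, not recommendations:

```typescript
// playwright.config.ts -- illustrative values
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Number of worker processes; undefined lets Playwright pick a default
  // based on available CPU cores.
  workers: process.env.CI ? 2 : undefined,
  // Also run tests within a single file in parallel, not just across files.
  fullyParallel: true,
});
```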

Sharding extends parallelism beyond a single machine. Large test suites can be split into shards that run on separate CI machines, with results aggregated after all shards complete. The sharding algorithm must distribute tests evenly to minimize total wall-clock time.
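As a sketch of even distribution, round-robin assignment of test files to shards can be modeled as follows. This is a simplified illustration, not Playwright's actual distribution algorithm:

```typescript
// Round-robin assignment of test files to shards -- a simplified model.
// `shard` is 1-based, matching the CLI convention of --shard=1/4.
function filesForShard(files: string[], shard: number, total: number): string[] {
  return files.filter((_, i) => i % total === shard - 1);
}

const files = ['a.spec.ts', 'b.spec.ts', 'c.spec.ts', 'd.spec.ts', 'e.spec.ts'];
console.log(filesForShard(files, 1, 2)); // files at even indices
console.log(filesForShard(files, 2, 2)); // files at odd indices
```

Real runners may instead balance by historical test duration rather than file count, since files vary widely in how long they take.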

Retries provide resilience against flaky tests -- tests that intermittently fail due to timing issues, network instability, or other non-deterministic factors. The retry strategy should be configurable (number of retries, which tests to retry) and the results should clearly distinguish between tests that passed on first attempt, tests that passed after retry, and tests that failed all attempts.
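In Playwright this is the `retries` config option; a test that fails and then passes on retry is reported as flaky rather than failed. The CI-only value below is a common but illustrative choice:

```typescript
// playwright.config.ts -- retry only in CI, where flakiness is costliest
import { defineConfig } from '@playwright/test';

export default defineConfig({
  retries: process.env.CI ? 2 : 0,
});
```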

Reporting transforms raw execution data into actionable information. Different consumers need different formats: developers need detailed failure output with stack traces and screenshots during local development, CI systems need structured formats (JUnit XML, JSON) for dashboard integration, and team leads need summary reports (HTML) for trend analysis.
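In a Playwright config, several reporters can be attached at once, one per consumer; the output paths here are illustrative:

```typescript
// playwright.config.ts -- one reporter per audience
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['list'],                                       // terminal feedback for developers
    ['junit', { outputFile: 'results/junit.xml' }], // machine-readable output for CI
    ['html', { open: 'never' }],                    // rich report for human review
  ],
});
```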

Usage

Test execution is performed during local development (typically in headed/debug mode with a subset of tests), in CI pipelines (headless mode with full suite, retries, and machine-readable reporters), and during pre-merge checks (focused on changed areas with faster feedback). Reporting configuration should match the execution context: verbose reporters for local development, structured reporters for CI, and HTML reporters for human review of CI results.
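One way to express this context switch is a single config that branches on a CI environment variable; a sketch, with illustrative values:

```typescript
// playwright.config.ts -- one config serving both contexts (illustrative)
import { defineConfig } from '@playwright/test';

const ci = !!process.env.CI;

export default defineConfig({
  retries: ci ? 2 : 0,              // retries only where flakiness blocks pipelines
  workers: ci ? 2 : undefined,      // cap workers on shared CI machines
  reporter: ci
    ? [['dot'], ['junit', { outputFile: 'results/junit.xml' }], ['html', { open: 'never' }]]
    : [['list']],                   // verbose output for local runs
});
```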

Theoretical Basis

The test execution and reporting process follows a multi-phase pipeline:

Phase 1 -- Discovery: The runner scans the configured test directory for files matching the test pattern (e.g., **/*.spec.ts). Each file is parsed to extract test definitions, including their titles, annotations, and grouping structure. Tests are filtered based on CLI arguments (grep patterns, project selection, tag filters).
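In a Playwright config, discovery corresponds to options such as `testDir` and `testMatch` (values below are illustrative), with CLI filters applied afterwards:

```typescript
// playwright.config.ts -- discovery settings (illustrative)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests',
  testMatch: '**/*.spec.ts',
  // CLI filters are applied after discovery, e.g.:
  //   npx playwright test --grep @smoke --project=chromium
});
```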

Phase 2 -- Planning: The discovered tests are organized into an execution plan. The planner considers:

  • Parallelism: How many workers to spawn, and which tests can run in parallel.
  • Sharding: If sharding is enabled, which subset of tests this shard is responsible for.
  • Dependencies: If tests have declared dependencies (e.g., setup projects), dependent tests are scheduled after their prerequisites.
  • Ordering: Tests may be ordered by file, by title, or randomly (for detecting order-dependent failures).

Phase 3 -- Execution: Workers execute tests according to the plan:

for each worker:
    while tests remain in queue:
        test = dequeue next test
        run beforeAll hooks (if entering new suite)
        run beforeEach hooks
        execute test body
        run afterEach hooks
        run afterAll hooks (if leaving suite)
        report result
        if failed and retries > 0:
            re-enqueue test with decremented retry count
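Stripped of hooks, the retry scheduling in the loop above can be modeled in plain TypeScript. This is a toy simulation of the queue mechanics, not Playwright's runner:

```typescript
// Toy model of a worker's test queue with retries -- not Playwright's runner.
type QueuedTest = { title: string; retriesLeft: number; attempt: number };

// A test body returns true on pass; this stub fails until `passOn` attempt.
function makeFlaky(passOn: number): (attempt: number) => boolean {
  return (attempt) => attempt >= passOn;
}

function runQueue(
  queue: QueuedTest[],
  bodies: Record<string, (attempt: number) => boolean>,
): string[] {
  const log: string[] = [];
  while (queue.length > 0) {
    const t = queue.shift()!;
    const passed = bodies[t.title](t.attempt);
    log.push(`${t.title}#${t.attempt}:${passed ? 'passed' : 'failed'}`);
    if (!passed && t.retriesLeft > 0) {
      // Re-enqueue with one fewer retry, as in the pseudocode above.
      queue.push({ title: t.title, retriesLeft: t.retriesLeft - 1, attempt: t.attempt + 1 });
    }
  }
  return log;
}

const log = runQueue(
  [
    { title: 'stable', retriesLeft: 2, attempt: 0 },
    { title: 'flaky', retriesLeft: 2, attempt: 0 },
  ],
  { stable: () => true, flaky: makeFlaky(1) },
);
console.log(log);
```

The flaky test fails its first attempt, is re-enqueued, and passes on retry, so its two attempts appear as separate entries in the result log.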

Phase 4 -- Result Collection: As tests complete, results are streamed to configured reporters in real time. Each result includes:

  • Test identity (file, title, project)
  • Status (passed, failed, timed out, skipped)
  • Duration
  • Retry attempt number
  • Error details (if failed): message, stack trace
  • Attachments: screenshots, traces, videos
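Such a record might be shaped roughly as follows; the field names are illustrative and do not reproduce Playwright's exact `TestResult` API:

```typescript
// Illustrative result record -- field names approximate, not Playwright's API.
interface TestResultRecord {
  file: string;
  title: string;
  project: string;
  status: 'passed' | 'failed' | 'timedOut' | 'skipped';
  durationMs: number;
  retry: number; // 0 for the first attempt
  error?: { message: string; stack?: string };
  attachments: { name: string; path: string }[];
}

const example: TestResultRecord = {
  file: 'login.spec.ts',
  title: 'user can sign in',
  project: 'chromium',
  status: 'failed',
  durationMs: 3120,
  retry: 1,
  error: { message: 'Timed out waiting for selector "#submit"' },
  attachments: [{ name: 'screenshot', path: 'test-results/login-failed.png' }],
};
console.log(example.status);
```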

Phase 5 -- Reporting: Reporters transform results into their target format:

  • Console reporters (list, line, dot): Real-time output for terminal feedback.
  • File reporters (JSON, JUnit): Machine-readable output for CI integration.
  • Rich reporters (HTML): Interactive reports with filtering, search, trace viewing, and screenshot comparison.
  • Blob reporters: Intermediate format for aggregating results across shards before generating the final report.
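To illustrate the file-reporter transformation, a hand-rolled and deliberately minimal JUnit XML emitter could look like the following sketch (real reporters also escape XML and emit timestamps, skipped counts, and system output):

```typescript
// Minimal JUnit XML emitter -- a sketch, not Playwright's junit reporter.
// Note: XML escaping of titles/messages is omitted for brevity.
type Result = { title: string; status: 'passed' | 'failed'; durationMs: number; message?: string };

function toJUnitXml(suite: string, results: Result[]): string {
  const failures = results.filter((r) => r.status === 'failed').length;
  const cases = results
    .map((r) => {
      const body = r.status === 'failed' ? `<failure message="${r.message ?? ''}"/>` : '';
      return `  <testcase name="${r.title}" time="${r.durationMs / 1000}">${body}</testcase>`;
    })
    .join('\n');
  return `<testsuite name="${suite}" tests="${results.length}" failures="${failures}">\n${cases}\n</testsuite>`;
}

const xml = toJUnitXml('login', [
  { title: 'signs in', status: 'passed', durationMs: 1200 },
  { title: 'rejects bad password', status: 'failed', durationMs: 800, message: 'expected 401' },
]);
console.log(xml);
```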

Phase 6 -- Exit Code: The runner produces an exit code that summarizes the overall result:

  • 0: All tests passed (including those that passed after retry).
  • 1: One or more tests failed.
  • 130: Execution was interrupted (e.g., Ctrl+C).
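The mapping above is simple enough to state as a function; this sketch mirrors the convention that flaky-but-recovered tests count as passed:

```typescript
// Map a run summary to an exit code, following the table above.
function exitCode(summary: { failed: number; interrupted: boolean }): number {
  if (summary.interrupted) return 130; // e.g. Ctrl+C
  return summary.failed > 0 ? 1 : 0;  // tests that passed after retry count as passed
}

console.log(exitCode({ failed: 0, interrupted: false }));
```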

