
Principle:PrefectHQ Prefect Task Retry

From Leeroopedia


Last Updated 2026-02-09 00:00 GMT

Overview

A decorator-based mechanism that transforms Python functions into independently retryable, observable pipeline steps with automatic failure recovery and exponential backoff.

Description

The @task decorator wraps individual functions to provide fine-grained observability and resilience. Tasks are the building blocks within flows -- each task run is independently tracked, can be retried on failure with configurable backoff, and produces its own state transitions.

Tasks enable:

  • Separation of concerns -- distinct steps (fetch, transform, load) are isolated units
  • Independent retry policies per step -- a flaky network call can retry 5 times while a local transform retries 0 times
  • Caching of results -- expensive computations can be cached and reused across runs
  • Concurrent execution -- tasks can be submitted concurrently for parallel processing

Each task run transitions through its own state lifecycle (Pending -> Running -> Completed or Failed), independent of the parent flow and sibling tasks.

Usage

Use @task when a function performs a discrete unit of work within a flow that might fail transiently and should be retried independently of other steps. Common scenarios include:

  • Network calls -- HTTP requests to external APIs
  • API requests -- third-party service integrations
  • Database operations -- reads and writes that may encounter connection issues
  • File I/O -- operations on remote storage systems
import httpx

from prefect import flow, task

@task(retries=3, retry_delay_seconds=[2, 5, 15])
def fetch_data(url: str) -> dict:
    """Fetch data with automatic retry on failure."""
    response = httpx.get(url, timeout=30)
    response.raise_for_status()
    return response.json()

@task
def transform(raw: dict) -> dict:
    """Transform step - no retries needed for pure computation."""
    return {k: v.strip() for k, v in raw.items()}

@flow
def pipeline(url: str):
    raw = fetch_data(url)
    cleaned = transform(raw)
    return cleaned

Theoretical Basis

Tasks implement the Retry Pattern with exponential backoff. The key insight is that transient failures -- network timeouts, rate limits, temporary unavailability -- can be recovered from by waiting and retrying.

The retry_delay_seconds parameter supports multiple backoff strategies:

  • Fixed delay -- retry_delay_seconds=5 -- wait exactly 5 seconds between each retry
  • Custom sequence -- retry_delay_seconds=[2, 5, 15] -- wait 2s, then 5s, then 15s (escalating backoff)
  • Exponential backoff -- retry_delay_seconds=[1, 2, 4, 8, 16] -- doubling delay pattern

The retry pattern is effective because:

  • Transient failures are common -- network jitter, rate limiting, and temporary outages are normal in distributed systems
  • Retrying is cheap -- the cost of a retry is typically much less than the cost of a complete pipeline restart
  • Independent retries are efficient -- only the failed step is retried, not the entire workflow
  • Backoff prevents thundering herd -- increasing delays reduce load on struggling services

Pseudocode for task retry logic:

for attempt in range(retries + 1):  # initial attempt plus up to `retries` retries
    try:
        result = task_fn(*args, **kwargs)
        return result  # Success
    except Exception:
        if attempt < retries:
            delay = retry_delay_seconds[attempt]
            sleep(delay)
        else:
            raise  # All retries exhausted
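The pseudocode above can be turned into a small, self-contained retry helper in plain Python, independent of Prefect (all names here are illustrative, not Prefect APIs):

```python
import time

def retry_call(fn, args=(), kwargs=None, retries=3,
               retry_delay_seconds=(2, 5, 15)):
    """Call fn, retrying up to `retries` times with per-attempt delays."""
    kwargs = kwargs or {}
    for attempt in range(retries + 1):  # initial attempt plus `retries` retries
        try:
            return fn(*args, **kwargs)
        except Exception:
            if attempt < retries:
                # Escalating backoff: one delay entry per retry attempt.
                time.sleep(retry_delay_seconds[attempt])
            else:
                raise  # All retries exhausted; surface the final failure
```

For example, a function that fails twice and then succeeds completes on the third attempt without the caller seeing the transient errors.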
