Workflow: CrewAIInc CrewAI Sequential Crew Execution
| Knowledge Sources | |
|---|---|
| Domains | Multi_Agent_Systems, LLM_Orchestration, Task_Automation |
| Last Updated | 2026-02-11 18:00 GMT |
Overview
End-to-end process for defining AI agents with specialized roles, assigning them ordered tasks, and executing a sequential crew workflow that produces structured output.
Description
This workflow represents the most fundamental CrewAI use case: assembling a team of autonomous AI agents and running them through a sequence of tasks. Each agent has a defined role, goal, and backstory that shapes its behavior. Tasks are executed one after another, with each subsequent task receiving context from previous task outputs. The workflow covers project scaffolding via the CLI, YAML-based configuration of agents and tasks, Python crew definition using the @CrewBase decorator pattern, and execution via crew.kickoff(). The final output is a CrewOutput object containing raw text, optional structured JSON/Pydantic data, and per-task output details.
Usage
Execute this workflow when you need to automate a multi-step process using LLM-powered agents that work in a defined order. Typical triggers include: you have a task that benefits from specialized agent roles (e.g., researcher then writer), you need a pipeline where each step builds on the results of the previous one, or you want to scaffold a new CrewAI project from scratch using the recommended YAML configuration pattern.
Execution Steps
Step 1: Project Scaffolding
Create a new CrewAI project using the CLI tool. This generates the standard directory structure with YAML configuration files for agents and tasks, a crew definition module, and a main entry point. The scaffold includes a .env file for API keys, a pyproject.toml for dependency management, and separate configuration directories for agents and tasks.
Key considerations:
- Requires Python 3.10+ and the crewai package installed
- The CLI creates a UV-based project structure by default
- Project name determines the Python module namespace
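Assuming the crewai CLI is installed, scaffolding might look like the following (the project name my_research_crew is illustrative):

```shell
# Install CrewAI, which provides the `crewai` CLI
pip install crewai

# Scaffold a new crew project: generates src/my_research_crew/ with
# config/agents.yaml, config/tasks.yaml, crew.py, main.py, plus a
# .env file and pyproject.toml at the project root
crewai create crew my_research_crew

cd my_research_crew
crewai install   # resolves and locks dependencies using UV
```

The generated module namespace (here `my_research_crew`) is derived from the project name, so choose a valid Python identifier.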
Step 2: Agent Configuration
Define agents in the agents.yaml configuration file or directly in Python. Each agent requires a role (what the agent does), a goal (its objective), and a backstory (contextual personality). Optionally configure the LLM model, available tools, memory settings, and delegation permissions. Agents can use template variables (e.g., {topic}) that are interpolated at runtime from kickoff inputs.
Key considerations:
- Each agent should have a distinct, non-overlapping role
- The backstory shapes the agent's approach and tone
- Tools are optional but enable agents to access external data
- Set allow_delegation=True to let agents pass work to other agents
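A minimal agents.yaml sketch with two complementary roles; the agent names, wording, and the {topic} variable are illustrative, not prescribed by CrewAI:

```yaml
researcher:
  role: >
    {topic} Senior Researcher
  goal: >
    Uncover accurate, up-to-date findings about {topic}
  backstory: >
    A meticulous analyst who verifies claims against primary
    sources before reporting them.

writer:
  role: >
    {topic} Content Writer
  goal: >
    Turn research notes into a clear, well-structured report
  backstory: >
    A technical writer who favors plain language and concrete
    examples over jargon.
```

The `{topic}` placeholders are interpolated at runtime from the dictionary passed to kickoff (Step 5).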
Step 3: Task Definition
Define tasks in tasks.yaml or in Python. Each task has a description (what to accomplish), an expected_output (format and content expectations), and an agent assignment. Tasks can specify output formats (raw text, JSON, or Pydantic models), output files, guardrails for validation, and context dependencies on other tasks. The task order in the list determines execution sequence.
Key considerations:
- Tasks execute in the order they appear in the tasks list
- The context parameter lets later tasks access earlier task outputs
- Use output_pydantic or output_json for structured results
- Guardrails validate output quality before accepting results
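A matching tasks.yaml sketch for the two agents above (task names and wording are illustrative); in a sequential process, the write task automatically receives the research task's output as context because it appears later in the list:

```yaml
research_task:
  description: >
    Research the latest developments in {topic}.
  expected_output: >
    A bullet list of the 5 most relevant findings, each with a
    one-sentence summary.
  agent: researcher

write_task:
  description: >
    Using the research findings, write a short report on {topic}.
  expected_output: >
    A markdown report with an introduction and one section per finding.
  agent: writer
  output_file: report.md
```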
Step 4: Crew Assembly
Create the Crew object by combining agents and tasks. Set the process parameter to Process.sequential for ordered execution. Configure crew-level settings including memory (for cross-task context), caching (to avoid redundant LLM calls), verbose logging, and optional knowledge sources. Use the @CrewBase decorator class pattern for YAML-driven configuration or plain Python for programmatic setup.
Key considerations:
- Sequential process executes tasks in list order
- Memory enables agents to retain context across tasks
- Cache reduces duplicate tool invocations and LLM calls
- Planning mode generates execution plans before running
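A sketch of the @CrewBase decorator pattern wired to the YAML files above. The class and method names are illustrative; running it requires the crewai package and a configured LLM API key, so it is shown here as a pattern rather than a verified program:

```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task


@CrewBase
class ResearchCrew:
    """Sequential two-agent crew driven by YAML configuration."""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def researcher(self) -> Agent:
        return Agent(config=self.agents_config["researcher"], verbose=True)

    @agent
    def writer(self) -> Agent:
        return Agent(config=self.agents_config["writer"], verbose=True)

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config["research_task"])

    @task
    def write_task(self) -> Task:
        return Task(config=self.tasks_config["write_task"])

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,  # collected by the @agent decorators
            tasks=self.tasks,    # collected in declaration order
            process=Process.sequential,
            verbose=True,
        )
```

Because tasks are collected in declaration order, the method order inside the class determines execution order for the sequential process.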
Step 5: Execution
Call crew.kickoff(inputs={...}) to start the sequential workflow. The inputs dictionary provides template variables that are interpolated into agent and task descriptions. Each task runs in order, with its assigned agent executing tool calls, reasoning through the problem, and producing output. Task outputs flow as context to downstream tasks that reference them.
Key considerations:
- Inputs are interpolated into all agent and task text fields
- Each task may take multiple LLM iterations to complete
- Async variant crew.kickoff_async() returns a coroutine
- kickoff_for_each() runs the same crew with multiple input sets
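A minimal run sketch, assuming the ResearchCrew class from Step 4 lives in the scaffolded crew module (module path and topic value are illustrative):

```python
# Hypothetical import path from the scaffolded project in Step 1
from my_research_crew.crew import ResearchCrew


def run():
    # These values replace the {topic} placeholders in agents.yaml
    # and tasks.yaml before any agent starts working
    inputs = {"topic": "AI agent frameworks"}
    result = ResearchCrew().crew().kickoff(inputs=inputs)
    print(result.raw)


if __name__ == "__main__":
    run()
```

For batch runs, kickoff_for_each accepts a list of input dictionaries and returns one result per dictionary; kickoff_async offers the same call as an awaitable coroutine.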
Step 6: Output Processing
Access the CrewOutput returned by kickoff. The output provides raw (text string), json_dict (parsed JSON if output format was JSON), pydantic (Pydantic model instance if configured), tasks_output (list of per-task TaskOutput objects), and token_usage (token consumption metrics). Optionally save outputs to files via the output_file task parameter.
Key considerations:
- CrewOutput aggregates the final task output as the crew result
- Individual task outputs are accessible via tasks_output list
- Token usage tracks prompt and completion tokens across all calls
- Output files are written automatically when output_file is set
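A sketch of inspecting the CrewOutput fields named above, continuing the illustrative ResearchCrew example (json_dict and pydantic are only populated when the final task configured output_json or output_pydantic):

```python
from my_research_crew.crew import ResearchCrew  # hypothetical module path

result = ResearchCrew().crew().kickoff(inputs={"topic": "AI agent frameworks"})

print(result.raw)                # final task's raw text, the crew-level result

if result.json_dict:             # set when the final task used output_json
    print(result.json_dict)
if result.pydantic:              # set when the final task used output_pydantic
    print(result.pydantic)

for task_output in result.tasks_output:  # one TaskOutput per task, in order
    print(task_output.description, "->", task_output.raw[:80])

print(result.token_usage)        # aggregated prompt/completion token counts
```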