Implementation:CrewAIInc CrewAI Crew Train Method
Overview
Concrete method for running iterative training loops with human feedback collection and persistence provided by the CrewAI framework.
Source
Signature
```python
def train(
    self,
    n_iterations: int,
    filename: str,
    inputs: dict[str, Any] | None = None,
) -> None
```
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `n_iterations` | `int` | Yes | Number of training iterations to run |
| `filename` | `str` | Yes | Path to the file where training data will be persisted (pickle format) |
| `inputs` | `dict[str, Any] \| None` | No | Optional dictionary of input variables for task interpolation |
I/O
- Input: `n_iterations` (number of training loops), `filename` (output training data file path), and optional `inputs` (variable dictionary for task templates)
- Output: Training data file written to disk containing per-agent feedback records. No return value (returns `None`).
Internal Behavior
The `train` method performs the following steps:
- Sets `human_input=True` on all tasks: iterates through `self.tasks` and sets each task's `human_input` attribute to `True`, ensuring human reviewers are prompted for feedback after each task execution.
- Disables delegation on all agents: iterates through `self.agents` and sets `allow_delegation=False`, ensuring each agent handles its assigned work directly without passing it to another agent.
- Runs `kickoff()` `n_iterations` times: executes the full crew workflow in a loop. Each iteration runs the complete task pipeline, collecting human feedback at each task boundary.
- Persists training data via `CrewTrainingHandler`: after each iteration, training data (including human feedback) is saved to the specified pickle file using the `CrewTrainingHandler` utility.
- `TaskEvaluator` evaluates per-agent training data: after all iterations complete, a `TaskEvaluator` instance analyzes the accumulated training data for each agent, generating refined prompts and behavior recommendations.
If an exception occurs during any iteration, the method catches the error and re-raises it as a wrapped exception with context about which iteration failed.
Import
from crewai import Crew
Example
```python
from crewai import Crew, Agent, Task, Process

# Define agents and tasks
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in {topic}",
    backstory="You are an expert research analyst.",
    verbose=True,
)

research_task = Task(
    description="Conduct thorough research about {topic}.",
    expected_output="A detailed research report with key findings.",
    agent=researcher,
)

# Configure crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.sequential,
    verbose=True,
    memory=True,
)

# Run training: 3 iterations with human feedback
crew.train(
    n_iterations=3,
    filename="training_data.pkl",
    inputs={"topic": "AI"},
)

# During each iteration, the human reviewer will be prompted
# to provide feedback on each task's output.
# After all 3 iterations, the TaskEvaluator generates
# refined agent behaviors based on collected feedback.
```
Key Implementation Details
- The method modifies task and agent attributes in place before starting the training loop. This means that after calling `train()`, tasks will still have `human_input=True` and agents will have `allow_delegation=False`.
- Training data is accumulated incrementally: if the file already exists, new iterations append to existing data rather than overwriting it.
- `CrewTrainingHandler` uses Python's `pickle` module for serialization, so the training data file is in binary pickle format.
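Because the file is plain pickle, it can be inspected with the standard library. The sketch below writes a sample record first so it runs standalone; the per-agent record structure shown is an assumption for illustration, not the documented CrewAI schema.

```python
import pickle

# Write a sample record so the sketch is self-contained; real files are
# produced by crew.train(). The record layout here is hypothetical.
sample = {
    "Senior Research Analyst": [
        {"iteration": 0, "human_feedback": "Add sources."},
    ],
}
with open("training_data.pkl", "wb") as f:
    pickle.dump(sample, f)

# Load and inspect the binary pickle file.
with open("training_data.pkl", "rb") as f:
    training_data = pickle.load(f)

for agent_role, records in training_data.items():
    print(agent_role, len(records))
```

As with any pickle file, only load training data files from sources you trust, since unpickling can execute arbitrary code.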
Principle
Principle:CrewAIInc_CrewAI_Training_Execution