Implementation:CrewAIInc CrewAI Crew Train Method

From Leeroopedia

Overview

A concrete method, provided by the CrewAI framework, for running iterative training loops that collect human feedback and persist it to disk.

Source

src/crewai/crew.py:L644-694

Signature

def train(
    self,
    n_iterations: int,
    filename: str,
    inputs: dict[str, Any] | None = None,
) -> None

Parameters

Parameter     Type                   Required  Description
n_iterations  int                    Yes       Number of training iterations to run
filename      str                    Yes       Path to the file where training data will be persisted (pickle format)
inputs        dict[str, Any] | None  No        Optional dictionary of input variables for task interpolation

I/O

  • Input: n_iterations (number of training loops), filename (output training data file path), and optional inputs (variable dictionary for task templates)
  • Output: Training data file written to disk containing per-agent feedback records. No return value (returns None).
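Conceptually, the optional inputs dictionary fills placeholders such as {topic} in task templates. The sketch below illustrates that substitution with Python's built-in str.format; it is an analogy, not CrewAI's actual interpolation code:

```python
# Sketch: how an inputs dict fills task template placeholders.
# This mimics the behavior with str.format; CrewAI's own
# interpolation logic may differ in detail.
description_template = "Conduct thorough research about {topic}."
inputs = {"topic": "AI"}

description = description_template.format(**inputs)
print(description)  # Conduct thorough research about AI.
```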

Internal Behavior

The train method performs the following steps:

  1. Sets human_input=True on all tasks — Iterates through self.tasks and sets each task's human_input attribute to True, ensuring human reviewers are prompted for feedback after each task execution.
  2. Disables delegation on all agents — Iterates through self.agents and sets allow_delegation=False, ensuring each agent handles its assigned work directly without passing it to another agent.
  3. Runs kickoff() n_iterations times — Executes the full crew workflow in a loop. Each iteration runs the complete task pipeline, collecting human feedback at each task boundary.
  4. Persists training data via CrewTrainingHandler — After each iteration, training data (including human feedback) is saved to the specified pickle file using the CrewTrainingHandler utility.
  5. TaskEvaluator evaluates per-agent training data — After all iterations complete, a TaskEvaluator instance analyzes the accumulated training data for each agent, generating refined prompts and behavior recommendations.

If an exception occurs during any iteration, the method catches the error and re-raises it as a wrapped exception with context about which iteration failed.

Import

from crewai import Crew

Example

from crewai import Crew, Agent, Task, Process

# Define agents and tasks
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in {topic}",
    backstory="You are an expert research analyst.",
    verbose=True,
)

research_task = Task(
    description="Conduct thorough research about {topic}.",
    expected_output="A detailed research report with key findings.",
    agent=researcher,
)

# Configure crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.sequential,
    verbose=True,
    memory=True,
)

# Run training: 3 iterations with human feedback
crew.train(
    n_iterations=3,
    filename="training_data.pkl",
    inputs={"topic": "AI"},
)
# During each iteration, the human reviewer will be prompted
# to provide feedback on each task's output.
# After all 3 iterations, the TaskEvaluator generates
# refined agent behaviors based on collected feedback.

Key Implementation Details

  • The method modifies task and agent attributes in-place before starting the training loop. This means that after calling train(), tasks will still have human_input=True and agents will have allow_delegation=False.
  • Training data is accumulated incrementally — if the file already exists, new iterations append to existing data rather than overwriting.
  • The CrewTrainingHandler uses Python's pickle module for serialization, so the training data file is in binary pickle format.

Principle

Principle:CrewAIInc_CrewAI_Training_Execution
