
Implementation:Roboflow RF-DETR Model Train

From Leeroopedia


Knowledge Sources
Domains Object_Detection, Training
Last Updated 2026-02-08 15:00 GMT

Overview

Concrete tool for executing the RF-DETR training loop with gradient accumulation, AMP, and multi-scale training.

Description

Model.train() orchestrates the complete training pipeline: building datasets and dataloaders, setting up the AdamW optimizer with layer-wise LR decay, constructing the LR scheduler (step or cosine), running per-epoch training via train_one_epoch, evaluating with evaluate, and managing checkpoints. train_one_epoch handles the per-iteration mechanics of gradient accumulation, AMP scaling, gradient clipping, EMA updates, and drop path scheduling.
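The per-iteration mechanics described above can be sketched in plain PyTorch. This is a minimal illustration of one gradient-accumulation cycle with AMP scaling, gradient clipping, and an EMA update, not the actual rf-detr code; the model, data, and hyperparameter values are placeholders:

```python
import torch

# Sketch of one accumulation cycle: the mechanics train_one_epoch performs
# per iteration. All names and values here are illustrative placeholders.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(8, 2).to(device)
ema_model = torch.nn.Linear(8, 2).to(device)
ema_model.load_state_dict(model.state_dict())
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

grad_accum_steps = 4   # sub-batches accumulated per optimizer step
max_norm = 0.1         # gradient clipping threshold
ema_decay = 0.999      # EMA momentum

optimizer.zero_grad()
for _ in range(grad_accum_steps):
    x = torch.randn(2, 8, device=device)
    y = torch.randint(0, 2, (2,), device=device)
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = torch.nn.functional.cross_entropy(model(x), y)
    # Divide so the accumulated gradient equals the full-batch average.
    scaler.scale(loss / grad_accum_steps).backward()

# Unscale before clipping so max_norm applies to true gradient magnitudes.
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()

# EMA weights track the model after each optimizer step.
with torch.no_grad():
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.mul_(ema_decay).add_(p, alpha=1 - ema_decay)
```

Dividing each sub-batch loss by grad_accum_steps keeps the update equivalent to a single larger batch, which is why the effective batch size is batch_size × grad_accum_steps.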

Usage

Called internally by RFDETR.train() after configuration. Not typically called directly by users.

Code Reference

Source Location

  • Repository: rf-detr
  • File: rfdetr/main.py
  • Lines: L175-552 (Model.train)
  • File: rfdetr/engine.py
  • Lines: L51-180 (train_one_epoch)

Signature

class Model:
    def train(self, callbacks: DefaultDict[str, List[Callable]], **kwargs) -> None:
        """
        Execute the full training loop.

        Args:
            callbacks: Dict of callback lists keyed by event name
                ("on_fit_epoch_end", "on_train_batch_start", "on_train_end")
            **kwargs: Training hyperparameters (lr, epochs, batch_size, etc.)
        """

def train_one_epoch(
    model: torch.nn.Module,
    criterion: torch.nn.Module,
    lr_scheduler: torch.optim.lr_scheduler.LRScheduler,
    data_loader: Iterable,
    optimizer: torch.optim.Optimizer,
    device: torch.device,
    epoch: int,
    batch_size: int,
    max_norm: float = 0,
    ema_m: torch.nn.Module = None,
    schedules: dict = {},
    num_training_steps_per_epoch: int = None,
    vit_encoder_num_layers: int = None,
    args=None,
    callbacks: DefaultDict[str, List[Callable]] = None,
) -> Dict[str, float]:
    """Train for one epoch, return metric averages."""

Import

from rfdetr.main import Model
from rfdetr.engine import train_one_epoch


I/O Contract

Inputs

Name | Type | Required | Description
callbacks | DefaultDict[str, List[Callable]] | Yes | Event callbacks for metrics, early stopping
lr | float | No | Base learning rate
epochs | int | No | Number of training epochs
batch_size | int | No | Batch size per device
grad_accum_steps | int | No | Gradient accumulation steps
dataset_dir | str | Yes | Dataset root directory
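Because gradients are accumulated, the batch size the optimizer effectively sees is batch_size × grad_accum_steps (the standard gradient-accumulation convention; the values below are illustrative):

```python
batch_size = 4        # per-device batch size
grad_accum_steps = 4  # sub-batches accumulated before each optimizer step
effective_batch = batch_size * grad_accum_steps
print(effective_batch)  # 16
```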

Outputs

Name | Type | Description
train_stats | Dict[str, float] | Per-epoch loss, class_error, lr
checkpoints | Files | Saved to output_dir every checkpoint_interval epochs
Best model | File | checkpoint_best_total.pth (best regular or EMA)
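A saved best checkpoint can be reloaded with torch.load. The filename below comes from the table above, but the checkpoint's internal key layout ("model" here) is an assumption for illustration, not confirmed by this page:

```python
import os
import tempfile
import torch

# Sketch: write and reload a checkpoint under the filename from the table
# above. The {"model": ...} layout is an assumed example, not the verified
# rf-detr checkpoint schema.
output_dir = tempfile.mkdtemp()
path = os.path.join(output_dir, "checkpoint_best_total.pth")

net = torch.nn.Linear(4, 2)
torch.save({"model": net.state_dict()}, path)

ckpt = torch.load(path, map_location="cpu")
net.load_state_dict(ckpt["model"])
print(sorted(ckpt.keys()))  # ['model']
```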

Usage Examples

Standard Training (via RFDETR API)

from rfdetr import RFDETRBase

model = RFDETRBase()
model.train(
    dataset_dir="/path/to/dataset",
    epochs=50,
    batch_size=4,
    grad_accum_steps=4,
)
# Training loop runs automatically with evaluation each epoch

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
