Implementation: RF-DETR Train Config (Roboflow rf-detr)
| Knowledge Sources | |
|---|---|
| Domains | Object_Detection, Training |
| Last Updated | 2026-02-08 15:00 GMT |
Overview
A concrete entry point, provided by the rf-detr library, for configuring and launching RF-DETR fine-tuning.
Description
RFDETR.train() accepts keyword arguments, validates them through TrainConfig (a Pydantic model), loads class names from the dataset, reinitializes the detection head for the correct class count, registers metric callbacks, and delegates to Model.train(). The TrainConfig class defines all training hyperparameters with sensible defaults for fine-tuning.
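The validate-then-delegate flow can be illustrated with a stripped-down stand-in for TrainConfig. This is a sketch using a plain dataclass rather than the real Pydantic model; MiniTrainConfig and its field subset are hypothetical, though the names and defaults mirror the signature shown later on this page:

```python
from dataclasses import dataclass

# Hypothetical stand-in for rfdetr.config.TrainConfig: a plain dataclass
# instead of a Pydantic model, carrying a subset of the real fields/defaults.
@dataclass
class MiniTrainConfig:
    dataset_dir: str          # required: no default, must be supplied
    lr: float = 1e-4
    lr_encoder: float = 1.5e-4
    batch_size: int = 4
    grad_accum_steps: int = 4
    epochs: int = 100
    use_ema: bool = True

def train(**kwargs) -> MiniTrainConfig:
    # RFDETR.train() validates its kwargs by constructing the config model;
    # a missing required argument fails here, before any training starts.
    return MiniTrainConfig(**kwargs)

cfg = train(dataset_dir="/data/my-dataset", epochs=50)
print(cfg.epochs, cfg.lr)  # overridden epochs, default lr
```

The practical consequence is that typos in required fields surface immediately at call time rather than mid-run.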
Usage
Call model.train(dataset_dir=...) on any RFDETR size variant to start fine-tuning.
Code Reference
Source Location
- Repository: rf-detr
- File: rfdetr/detr.py
- Lines: L89-94 (RFDETR.train), L167-238 (train_from_config)
- File: rfdetr/config.py
- Lines: L242-301 (TrainConfig, SegmentationTrainConfig)
Signature
```python
class RFDETR:
    def train(self, **kwargs) -> None:
        """
        Train an RF-DETR model.

        All training parameters are passed as keyword arguments.
        """

class TrainConfig(BaseModel):
    lr: float = 1e-4
    lr_encoder: float = 1.5e-4
    batch_size: int = 4
    grad_accum_steps: int = 4
    epochs: int = 100
    dataset_dir: str  # Required
    dataset_file: Literal["coco", "o365", "roboflow", "yolo"] = "roboflow"
    output_dir: str = "output"
    use_ema: bool = True
    ema_decay: float = 0.993
    early_stopping: bool = False
    early_stopping_patience: int = 10
    multi_scale: bool = True
    tensorboard: bool = True
    wandb: bool = False
    weight_decay: float = 1e-4
    warmup_epochs: float = 0.0
    ...
```
Import
```python
from rfdetr import RFDETRBase
from rfdetr.config import TrainConfig
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| dataset_dir | str | Yes | Path to dataset root directory |
| epochs | int | No | Number of training epochs (default: 100) |
| batch_size | int | No | Batch size per device (default: 4) |
| lr | float | No | Base learning rate (default: 1e-4) |
| lr_encoder | float | No | Backbone learning rate (default: 1.5e-4) |
| output_dir | str | No | Directory for checkpoints and logs (default: "output") |
| use_ema | bool | No | Enable EMA model tracking (default: True) |
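Note that gradient accumulation multiplies the per-device batch size, so the defaults train with a larger effective batch than `batch_size` alone suggests. A quick arithmetic check using the default values (a sketch, not library code):

```python
# Effective batch size = per-device batch_size * gradient-accumulation steps.
# Values below are the TrainConfig defaults listed in the Inputs table.
batch_size = 4
grad_accum_steps = 4
effective_batch = batch_size * grad_accum_steps
print(effective_batch)  # 16
```

Halving `batch_size` while doubling `grad_accum_steps` keeps the effective batch constant, which is a common way to fit training into less GPU memory.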
Outputs
| Name | Type | Description |
|---|---|---|
| Trained model | RFDETR | Model with updated weights (best EMA or regular) |
| Checkpoints | Files | checkpoint_best_total.pth, checkpoint_best_regular.pth, checkpoint_best_ema.pth |
| results.json | File | Per-class mAP, precision, recall, F1 scores |
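The per-class metrics in results.json can be inspected programmatically after training. The exact JSON layout is not documented on this page, so the schema assumed below (a top-level "class_map" mapping class names to metric dicts) is purely illustrative; adapt the key names to the actual file:

```python
import json
from pathlib import Path

def best_class_by_map(results_path: str) -> str:
    # ASSUMED schema: {"class_map": {"<class>": {"map": ..., "precision": ...,
    # "recall": ..., "f1": ...}, ...}} -- check the real results.json layout.
    data = json.loads(Path(results_path).read_text())
    per_class = data["class_map"]
    # Return the class name with the highest mAP.
    return max(per_class, key=lambda name: per_class[name]["map"])
```

The same pattern extends to flagging underperforming classes (e.g. every class whose F1 falls below a threshold) when deciding whether more data is needed.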
Usage Examples
Basic Fine-tuning
```python
from rfdetr import RFDETRBase

model = RFDETRBase()
model.train(
    dataset_dir="/path/to/dataset",
    epochs=50,
    batch_size=8,
    output_dir="./training_output",
)
```
Advanced Configuration
```python
from rfdetr import RFDETRMedium

model = RFDETRMedium()
model.train(
    dataset_dir="/path/to/dataset",
    epochs=100,
    batch_size=4,
    grad_accum_steps=8,
    lr=5e-5,
    lr_encoder=7.5e-5,
    use_ema=True,
    early_stopping=True,
    early_stopping_patience=15,
    wandb=True,
    project="my-detection-project",
)
```
Related Pages
- Implements Principle
- Requires Environment
- Uses Heuristic