Implementation:Fastai Fastbook Fine Tune
| Knowledge Sources | |
|---|---|
| Domains | Deep_Learning, Transfer_Learning, Optimization |
| Last Updated | 2026-02-09 17:00 GMT |
Overview
Learner.fine_tune and Learner.fit_one_cycle (from fastai.callback.schedule), together with Learner.unfreeze, provide the concrete tools for training a transfer-learning model.
Description
The fine_tune method implements the complete two-phase transfer learning training protocol in a single call: it freezes the body, trains the head for a specified number of warmup epochs, then unfreezes the body and trains the full model with discriminative learning rates for the main epochs.
For practitioners who need more control, fit_one_cycle and unfreeze can be used separately to implement custom training schedules with explicit learning rate ranges.
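The two-phase protocol described above can be sketched in plain Python. The function below is an illustrative stand-in based on a reading of the fastai source, not the library implementation; `fine_tune_sketch` is a hypothetical name, and callback/kwargs plumbing is omitted:

```python
def fine_tune_sketch(learn, epochs, base_lr=2e-3, freeze_epochs=1,
                     lr_mult=100, pct_start=0.3, div=5.0):
    """Simplified sketch of the two phases Learner.fine_tune runs."""
    # Phase 1: body frozen, train only the head
    learn.freeze()
    learn.fit_one_cycle(freeze_epochs, slice(base_lr), pct_start=0.99)
    # Phase 2: halve the LR, unfreeze, train everything with
    # discriminative learning rates (lowest layer group gets base_lr/lr_mult)
    base_lr /= 2
    learn.unfreeze()
    learn.fit_one_cycle(epochs, slice(base_lr / lr_mult, base_lr),
                        pct_start=pct_start, div=div)
```

The `slice(low, high)` passed in phase 2 is what spreads learning rates across layer groups, keeping early (pretrained) layers nearly unchanged while the head moves fastest.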
Usage
Call learn.fine_tune(epochs) as the primary training method after creating a cnn_learner. Use fit_one_cycle and unfreeze when you need to customize the number of frozen epochs, the discriminative learning rate range, or when you want to interleave lr_find calls between phases.
Code Reference
Source Location
- Repository: fastbook
- File: translations/cn/02_production.md (lines 402-414), translations/cn/05_pet_breeds.md (lines 648-715)
Signature
```python
# All-in-one fine-tuning
Learner.fine_tune(
    epochs,            # Number of unfrozen training epochs
    base_lr=2e-3,      # Base learning rate (halved before the unfrozen phase)
    freeze_epochs=1,   # Number of frozen epochs for head warmup
    lr_mult=100,       # LR ratio between highest and lowest layer group
    pct_start=0.3,     # Fraction of the unfrozen phase spent in LR warmup
    div=5.0,           # Initial LR = base_lr / div at the start of warmup
    div_final=1e5,     # Final LR = base_lr / div_final at end of annealing
    wd=None,           # Weight decay
    moms=None,         # Momentum range
    cbs=None           # Additional callbacks
)

# Manual one-cycle training
Learner.fit_one_cycle(
    n_epoch,           # Number of epochs
    lr_max=None,       # Maximum learning rate (or slice for discriminative LRs)
    div=25.0,          # LR at start = lr_max / div
    div_final=1e5,     # LR at end = lr_max / div_final
    pct_start=0.25,    # Fraction of training spent in warmup
    wd=None,           # Weight decay
    moms=None,         # Momentum range (default: (0.95, 0.85, 0.95))
    cbs=None           # Additional callbacks
)

# Unfreeze all model parameter groups
Learner.unfreeze()
```
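The div, pct_start, and div_final parameters define the one-cycle learning-rate curve. Below is a dependency-free sketch of that curve, assuming cosine annealing in both phases (fastai builds the schedule this way via its combined_cos helper); the function names are illustrative, not fastai API:

```python
import math

def cos_anneal(start, end, pos):
    # Cosine interpolation from start to end as pos goes 0 -> 1
    return start + (1 + math.cos(math.pi * (1 - pos))) * (end - start) / 2

def one_cycle_lr(t, lr_max, div=25.0, div_final=1e5, pct_start=0.25):
    """Learning rate at training fraction t (0 <= t <= 1) under one-cycle."""
    if t < pct_start:
        # Warmup: anneal lr_max/div -> lr_max
        return cos_anneal(lr_max / div, lr_max, t / pct_start)
    # Annealing: lr_max -> lr_max/div_final
    return cos_anneal(lr_max, lr_max / div_final,
                      (t - pct_start) / (1 - pct_start))
```

The LR starts at lr_max/div, peaks at lr_max when t reaches pct_start, then decays to lr_max/div_final by the end of training.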
Import
```python
from fastai.vision.all import Learner, cnn_learner
# fine_tune, fit_one_cycle, and unfreeze are methods on Learner
```
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| epochs | int | Yes | Number of unfrozen training epochs for fine_tune |
| base_lr | float | No | Base learning rate for fine_tune (default: 2e-3); halved internally before the unfrozen phase |
| freeze_epochs | int | No | Number of epochs to train with body frozen (default: 1) |
| lr_mult | int | No | Ratio between the highest and lowest layer group learning rates (default: 100) |
| lr_max | float or slice | No | Maximum LR for fit_one_cycle; use slice(low, high) for discriminative rates |
| n_epoch | int | Yes | Number of training epochs for fit_one_cycle |
Outputs
| Name | Type | Description |
|---|---|---|
| trained model | Learner (mutated in place) | The Learner with updated model weights after training |
| training log | printed table | Epoch-by-epoch table showing train_loss, valid_loss, and specified metrics |
Usage Examples
Basic Usage: fine_tune (Chapter 2)
```python
from fastai.vision.all import *

path = untar_data(URLs.PETS) / 'images'
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path),
    valid_pct=0.2, seed=42,
    label_func=lambda x: x[0].isupper(),  # cat filenames start with uppercase
    item_tfms=Resize(224)
)
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(4)
# Output:
# epoch  train_loss  valid_loss  error_rate  time
# 0      0.163754    0.021388    0.005413    00:14  (frozen)
# 0      0.058871    0.025816    0.009472    00:19  (unfrozen)
# 1      0.038910    0.018753    0.006766    00:19
# 2      0.024956    0.016284    0.004060    00:19
# 3      0.017429    0.016641    0.005413    00:19
```
Manual Two-Phase Training (Chapter 5)
```python
from fastai.vision.all import *

learn = cnn_learner(dls, resnet34, metrics=error_rate)

# Phase 1: Train only the head
learn.fit_one_cycle(3, 3e-3)

# Phase 2: Unfreeze and train with discriminative LRs
learn.unfreeze()

# Find an appropriate LR for the unfrozen model.
# Note: recent fastai versions return only a `valley` suggestion by default;
# pass suggest_funcs=(minimum, steep) to get these two values.
lr_min, lr_steep = learn.lr_find()

# Train with a lower LR for the body, higher for the head
learn.fit_one_cycle(6, lr_max=slice(1e-6, 1e-4))
```
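The `slice(1e-6, 1e-4)` above gives each parameter group its own learning rate between the two endpoints; fastai spaces the rates geometrically across groups (its even_mults helper works this way). A dependency-free re-implementation for illustration, under that assumption:

```python
def even_mults(start, stop, n):
    """Return n values geometrically spaced from start to stop."""
    if n == 1:
        return [stop]
    step = (stop / start) ** (1 / (n - 1))  # constant ratio between neighbors
    return [start * step**i for i in range(n)]

# Three layer groups with slice(1e-6, 1e-4): each group's LR is 10x the previous
lrs = even_mults(1e-6, 1e-4, 3)
```

Geometric (rather than linear) spacing means each deeper layer group trains an order of magnitude faster than the one before it, matching the intuition that pretrained early layers need the smallest updates.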
Custom Freeze Epochs and Learning Rate
```python
from fastai.vision.all import *

learn = cnn_learner(dls, resnet50, metrics=[accuracy, error_rate])

# Use lr_find to determine the base learning rate
lr_min, lr_steep = learn.lr_find()

# Fine-tune with 2 frozen warmup epochs and a custom base LR
learn.fine_tune(5, base_lr=lr_min, freeze_epochs=2)
```
Related Pages
Implements Principle
Requires Environment
- Environment:Fastai_Fastbook_Python_FastAI_Environment
- Environment:Fastai_Fastbook_CUDA_GPU_Environment