
Implementation:Fastai Fastbook Lr Find

From Leeroopedia


Knowledge Sources
Domains Deep_Learning, Optimization, Computer_Vision
Last Updated 2026-02-09 17:00 GMT

Overview

A concrete tool, provided by fastai.callback.schedule.Learner.lr_find, for empirically determining a good learning rate.

Description

The lr_find method on a Learner object executes Leslie Smith's learning rate range test: it trains the model for a small number of iterations while increasing the learning rate exponentially, records the loss at each step, and plots loss against learning rate. It also returns two suggested learning rates: the learning rate at the loss minimum divided by 10 for safety (the valley suggestion) and the learning rate at the point of steepest loss descent.

After lr_find completes, the model weights are restored to their pre-test state, so the test makes no permanent changes.
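The exponential sweep itself is simple: at step i of num_it, the learning rate is start_lr * (end_lr / start_lr) ** (i / num_it). A minimal standalone sketch of that schedule (plain Python, no fastai required; the function name is ours):

```python
def lr_schedule(i, num_it=100, start_lr=1e-7, end_lr=10.0):
    """Exponentially interpolate the learning rate from start_lr to
    end_lr over num_it steps, as the range test does."""
    return start_lr * (end_lr / start_lr) ** (i / num_it)

# With the defaults, the sweep covers eight orders of magnitude
print(f"{lr_schedule(0):.1e}")    # 1.0e-07 (start)
print(f"{lr_schedule(50):.1e}")   # 1.0e-03 (geometric midpoint)
print(f"{lr_schedule(100):.1e}")  # 1.0e+01 (end)
```

Because the schedule is geometric rather than linear, each decade of learning rates gets the same number of test iterations, which is why the resulting plot uses a log-scale x-axis.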

Usage

Call learn.lr_find() after creating a cnn_learner and before calling fine_tune or fit_one_cycle. Also call it after unfreezing the model to find the appropriate learning rate for the unfrozen body layers.

Code Reference

Source Location

  • Repository: fastbook
  • File: translations/cn/05_pet_breeds.md (lines 629-640)

Signature

Learner.lr_find(
    start_lr=1e-7,                 # Starting learning rate (very small)
    end_lr=10,                     # Ending learning rate (very large)
    num_it=100,                    # Number of iterations to run
    stop_div=True,                 # Stop early if loss diverges
    show_plot=True,                # Display the loss vs. LR plot
    suggest_funcs=(valley, steep)  # Functions to compute suggested LRs
)
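To illustrate how the two default suggestions relate to the recorded curve, here is a toy reimplementation run on a synthetic loss curve. This is a sketch under simplifying assumptions, not fastai's actual valley and steep code, which is more robust to noisy curves:

```python
import numpy as np

def suggest_lrs(lrs, losses):
    """Toy suggesters (illustrative only, not fastai's implementations).
    lr_min: learning rate at the minimum loss, divided by 10 for safety.
    lr_steep: learning rate where loss falls fastest vs. log-LR."""
    lrs, losses = np.asarray(lrs), np.asarray(losses)
    lr_min = lrs[np.argmin(losses)] / 10
    slopes = np.gradient(losses, np.log(lrs))  # slope of loss vs. log-LR
    lr_steep = lrs[np.argmin(slopes)]          # most negative slope
    return lr_min, lr_steep

# Synthetic curve: flat plateau at tiny LRs, sharp drop near 1e-4,
# minimum near 1e-3, then divergence as the LR grows too large
lrs = np.logspace(-7, 1, 200)
x = np.log10(lrs)
losses = 2 - np.tanh(2 * (x + 4)) + np.exp(x + 1)

lr_min, lr_steep = suggest_lrs(lrs, losses)
```

On this curve the steepest-descent point lands around 1e-4 (mid-drop) while the safety-divided minimum lands near 1e-4 as well; on real training curves the two suggestions often differ by an order of magnitude or more, which is why lr_find reports both.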

Import

from fastai.vision.all import cnn_learner, resnet34, Learner
# lr_find is a method on Learner, no separate import needed

I/O Contract

Inputs

Name Type Required Description
start_lr float No Starting learning rate for the sweep (default: 1e-7)
end_lr float No Ending learning rate for the sweep (default: 10)
num_it int No Number of training iterations to run (default: 100)
stop_div bool No Stop early if loss exceeds 4x the minimum observed loss (default: True)
show_plot bool No Display the matplotlib plot of loss vs. learning rate (default: True)
suggest_funcs tuple No Tuple of functions that compute suggested LR values (default: (valley, steep))
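The stop_div rule described above can be mimicked in a few lines: track the best loss seen so far and abort the sweep once the current loss exceeds four times it. A standalone sketch (the 4x threshold comes from the table above; the function name and the simulated losses are ours):

```python
def should_stop(loss, best_loss, threshold=4.0):
    """Divergence check: stop the sweep once the current loss exceeds
    `threshold` times the best loss observed so far."""
    return loss > threshold * best_loss

# Simulated sweep: loss improves at moderate LRs, then blows up
losses = [2.0, 1.5, 1.0, 0.8, 0.9, 1.5, 3.0, 4.0]

best = float("inf")
stopped_at = None
for i, loss in enumerate(losses):
    if should_stop(loss, best):
        stopped_at = i  # sweep aborted here; remaining LRs never tested
        break
    best = min(best, loss)
```

Stopping early this way keeps the final, wildly diverged losses from dominating the plot and wasting iterations at learning rates that are clearly too large.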

Outputs

Name Type Description
lr_min float Suggested learning rate from the valley method (learning rate at the minimum loss, divided by 10)
lr_steep float Suggested learning rate from the steepest descent method
plot matplotlib figure Plot of loss (y-axis) vs. learning rate on log scale (x-axis) with suggested values marked

Usage Examples

Basic Usage: Find LR Before Training

from fastai.vision.all import *

path = untar_data(URLs.PETS) / 'images'

pets = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=using_attr(RegexLabeller(r'^(.+)_\d+\.jpg$'), 'name'),
    item_tfms=Resize(460),
    batch_tfms=aug_transforms(size=224, min_scale=0.75)
)
dls = pets.dataloaders(path, bs=64)

learn = cnn_learner(dls, resnet34, metrics=error_rate)

# Run the learning rate finder
lr_min, lr_steep = learn.lr_find()
print(f'Valley (safe) LR: {lr_min:.2e}')
print(f'Steepest descent LR: {lr_steep:.2e}')

Using the Result for Training

# After finding the learning rate, use it for training
lr_min, lr_steep = learn.lr_find()

# Use the suggested valley learning rate
learn.fine_tune(3, base_lr=lr_min)

After Unfreezing

# First train the head only
learn.fit_one_cycle(3, 3e-3)

# Unfreeze and find LR for the full model
learn.unfreeze()
lr_min, lr_steep = learn.lr_find()
print(f'Suggested LR for unfrozen model: {lr_min:.2e}')

# Train with discriminative learning rates
learn.fit_one_cycle(6, lr_max=slice(1e-6, lr_min))
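fastbook describes slice(lo, hi) as giving the earliest parameter group the rate lo, the last group hi, and the groups in between multiplicatively equidistant rates. That geometric spread can be sketched as follows (our own minimal helper, not fastai's internal code):

```python
def spread_lrs(lo, hi, n_groups):
    """Geometrically spaced learning rates from lo (earliest layers)
    to hi (final layers), as fastbook describes for slice(lo, hi)."""
    if n_groups == 1:
        return [hi]
    mult = (hi / lo) ** (1 / (n_groups - 1))
    return [lo * mult**i for i in range(n_groups)]

# slice(1e-6, 1e-4) spread over three parameter groups
group_lrs = spread_lrs(1e-6, 1e-4, 3)  # 1e-6, 1e-5, 1e-4
```

This is why pairing slice with the lr_find suggestion works well after unfreezing: the pretrained early layers get a tiny rate while the later, more task-specific layers train near the suggested rate.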

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
