
Implementation:Fastai Fastbook Collab Learner Dot Product

From Leeroopedia


Knowledge Sources
Domains Recommender Systems, Matrix Factorization
Last Updated 2026-02-09 17:00 GMT

Overview

A concrete tool from fastai.collab for creating and training a dot-product collaborative filtering model with bias terms.

Description

collab_learner is a factory function that constructs a fastai Learner preconfigured for collaborative filtering. When called without use_nn=True, it builds an EmbeddingDotBias model internally. This model contains four embedding layers: user weights, item weights, user biases, and item biases. The forward pass computes the dot product of the user and item embeddings, adds both bias terms, and, when y_range is given, applies sigmoid_range to constrain the output to that interval. The learner's default loss is MSE; the fastbook examples train it with the one-cycle learning-rate policy via fit_one_cycle.
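The forward pass described above can be sketched in plain NumPy. This is a simplified stand-in for the PyTorch embedding lookups the real EmbeddingDotBias performs; the names mirror the real model's attributes, but the tables here are random toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 10, 20, 5
y_range = (0.0, 5.5)

# Toy stand-ins for the four embedding tables in EmbeddingDotBias
u_weight = rng.normal(scale=0.1, size=(n_users, n_factors))
i_weight = rng.normal(scale=0.1, size=(n_items, n_factors))
u_bias = np.zeros(n_users)
i_bias = np.zeros(n_items)

def sigmoid_range(x, low, high):
    """Squash x into (low, high), as fastai's sigmoid_range does."""
    return low + (high - low) / (1 + np.exp(-x))

def forward(users, items):
    """Dot product of user/item factors plus both biases, then squashed."""
    dot = (u_weight[users] * i_weight[items]).sum(axis=1)
    raw = dot + u_bias[users] + i_bias[items]
    return sigmoid_range(raw, *y_range)

preds = forward(np.array([0, 1]), np.array([3, 7]))
```

Note that sigmoid_range maps a raw score of 0 to the midpoint of y_range, and never actually attains the endpoints, which is why fastbook sets the upper bound to 5.5 rather than 5.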

Usage

Use collab_learner with the dot-product architecture (the default) as the standard approach for explicit-rating collaborative filtering. Pass the DataLoaders from CollabDataLoaders.from_df, specify the number of latent factors, and set y_range and wd for regularization. After training, access the model's embeddings and biases for downstream analysis.

Code Reference

Source Location

  • Repository: fastbook
  • File: translations/cn/08_collab.md (Lines 553-580)

Signature

collab_learner(
    dls: DataLoaders,
    n_factors: int = 50,
    use_nn: bool = False,
    emb_szs: dict = None,
    layers: list = None,
    config: dict = None,
    y_range: tuple = None,
    loss_func: callable = None,
    **kwargs
) -> Learner

When use_nn=False (default), the internal model is:

EmbeddingDotBias(
    n_factors: int,
    n_users: int,
    n_items: int,
    y_range: tuple = None
)
# Internal structure:
#   u_weight: Embedding(n_users, n_factors)
#   i_weight: Embedding(n_items, n_factors)
#   u_bias:   Embedding(n_users, 1)
#   i_bias:   Embedding(n_items, 1)
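fastai's actual training (Adam optimizer, one-cycle schedule) is richer than this, but the objective being optimized can be sketched with plain-NumPy SGD on synthetic ratings: squared error on the biased dot product, plus an L2 penalty playing the role of wd. All sizes, learning rates, and data here are illustrative, and sigmoid_range is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(42)
n_users, n_items, n_factors = 30, 40, 4
wd, lr = 0.1, 0.05

# Dot-product model parameters: small random factors, zero biases
U = rng.normal(scale=0.1, size=(n_users, n_factors))
V = rng.normal(scale=0.1, size=(n_items, n_factors))
bu = np.zeros(n_users)
bi = np.zeros(n_items)

# Synthetic (user, item, rating) triples for illustration only
users = rng.integers(0, n_users, size=2000)
items = rng.integers(0, n_items, size=2000)
ratings = rng.uniform(0, 5, size=2000)

def mse(users, items, ratings):
    preds = (U[users] * V[items]).sum(1) + bu[users] + bi[items]
    return np.mean((preds - ratings) ** 2)

loss_before = mse(users, items, ratings)

for _ in range(50):
    preds = (U[users] * V[items]).sum(1) + bu[users] + bi[items]
    err = preds - ratings  # d(MSE)/d(pred), up to the factor 2/n
    # Accumulate per-example gradients into the embedding tables
    gU = np.zeros_like(U); gV = np.zeros_like(V)
    gbu = np.zeros_like(bu); gbi = np.zeros_like(bi)
    np.add.at(gU, users, err[:, None] * V[items])
    np.add.at(gV, items, err[:, None] * U[users])
    np.add.at(gbu, users, err)
    np.add.at(gbi, items, err)
    n = len(ratings)
    # SGD step with L2 weight decay (the role wd plays in fastai)
    U -= lr * (2 * gU / n + wd * U)
    V -= lr * (2 * gV / n + wd * V)
    bu -= lr * (2 * gbu / n + wd * bu)
    bi -= lr * (2 * gbi / n + wd * bi)

loss_after = mse(users, items, ratings)
```

The weight-decay term shrinks all parameters toward zero each step, which is what keeps the 50-factor model from overfitting the rating matrix.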

Import

from fastai.collab import collab_learner, EmbeddingDotBias

I/O Contract

Inputs

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| dls | DataLoaders | Yes | DataLoaders created by CollabDataLoaders.from_df |
| n_factors | int | No | Number of latent factors per user and item embedding; defaults to 50 |
| y_range | tuple | No | Output range as (low, high); e.g., (0, 5.5) so the top rating of 5 stays reachable, since sigmoid never attains its endpoints |
| wd | float | No | Weight decay (L2 regularization) passed to fit_one_cycle; e.g., 0.1 |

Outputs

| Name | Type | Description |
| --- | --- | --- |
| learn | Learner | A fastai Learner wrapping the EmbeddingDotBias model, MSE loss, and optimizer |
| learn.model | EmbeddingDotBias | The trained model with four embedding layers |
| learn.model.u_weight | Embedding(944, 50) | User latent factor embeddings (for MovieLens 100K) |
| learn.model.i_weight | Embedding(1635, 50) | Item latent factor embeddings (for MovieLens 100K) |
| learn.model.u_bias | Embedding(944, 1) | Per-user bias values |
| learn.model.i_bias | Embedding(1635, 1) | Per-item bias values |
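One common downstream use of these outputs is finding similar items by comparing rows of the item-factor table. The sketch below uses random toy embeddings; against a trained learner you would instead pass learn.model.i_weight.weight.detach().numpy() and map indices to names with dls.classes['title'] (both are assumptions about your setup):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for learn.model.i_weight.weight (n_items x n_factors)
item_factors = rng.normal(size=(8, 5))

def most_similar(idx, factors, k=3):
    """Indices of the k items with highest cosine similarity to item idx."""
    norms = np.linalg.norm(factors, axis=1)
    sims = factors @ factors[idx] / (norms * norms[idx])
    order = np.argsort(-sims)
    return [int(i) for i in order if i != idx][:k]

neighbours = most_similar(0, item_factors)
```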

Usage Examples

Basic Usage

from fastai.collab import *
from fastai.tabular.all import *

# Assumes 'dls' DataLoaders already created
# (see Fastai_Fastbook_CollabDataLoaders_From_Df)

# Create a dot-product collaborative filtering learner
learn = collab_learner(dls, n_factors=50, y_range=(0, 5.5))

# Train for 5 epochs with one-cycle policy and weight decay
learn.fit_one_cycle(5, 5e-3, wd=0.1)
# Output:
# epoch  train_loss  valid_loss  time
# 0      0.931751    0.953806    00:13
# 1      0.851826    0.878119    00:13
# 2      0.715254    0.834711    00:13
# 3      0.583173    0.821470    00:13
# 4      0.496625    0.821688    00:13

# Inspect the model architecture
print(learn.model)
# Output:
# EmbeddingDotBias(
#   (u_weight): Embedding(944, 50)
#   (i_weight): Embedding(1635, 50)
#   (u_bias): Embedding(944, 1)
#   (i_bias): Embedding(1635, 1)
# )

# Access learned movie biases for analysis
movie_bias = learn.model.i_bias.weight.squeeze()
top_bias_idxs = movie_bias.argsort(descending=True)[:5]
top_movies = [dls.classes['title'][i] for i in top_bias_idxs]
print(top_movies)
# Output: ['Titanic (1997)', "Schindler's List (1993)", ...]
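The fastbook chapter goes on to project the movie embeddings to two dimensions for plotting. A hedged sketch of that PCA step using plain NumPy SVD on toy data (in practice the input would be learn.model.i_weight.weight, detached to a NumPy array):

```python
import numpy as np

rng = np.random.default_rng(7)
item_factors = rng.normal(size=(50, 50))  # toy stand-in for i_weight

# PCA via SVD: center the rows, decompose, keep the top-2 components
centered = item_factors - item_factors.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ Vt[:2].T  # (n_items, 2) plotting coordinates
```

Plotting coords_2d with the corresponding titles reveals the kind of taste axes (e.g., critically acclaimed vs. crowd-pleasing) that the chapter discusses.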

Related Pages

Implements Principle

Requires Environment

Uses Heuristic
