Principle: LaurentMazare tch-rs Feature Extraction
| Knowledge Sources | |
|---|---|
| Domains | Computer_Vision, Transfer_Learning |
| Last Updated | 2026-02-08 14:00 GMT |
Overview
A transfer-learning technique that uses a pretrained model, stripped of its final classification layer, as a fixed feature extractor for downstream tasks.
Description
Feature extraction removes the final fully-connected (classification) layer from a pretrained model, so a forward pass produces a lower-dimensional embedding (e.g., 512-dim for ResNet-18) instead of class logits. The backbone (convolutional layers plus pooling) is frozen and used only to transform raw images into feature vectors. These features capture general visual patterns learned from ImageNet and can be reused for new classification tasks by training a simple linear classifier on top.
Usage
Use when you have a small custom dataset and want to leverage pretrained visual features. Freeze the backbone (run it under no_grad) and pre-compute features for all images once, then train a lightweight classifier on the cached features.
Theoretical Basis
Standard ResNet-18: Input → [backbone] → [fc: 512→1000] → class logits
No-Final-Layer: Input → [backbone] → 512-dim feature vector
Transfer Learning Pipeline:
1. Load ResNet-18 without final layer (produces 512-dim features)
2. Load pretrained weights (the file's fc weights have no matching variable in the truncated model, so they are ignored)
3. no_grad { pre-compute features for all images }
4. Train new linear classifier: 512 → num_custom_classes
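The steps above can be sketched end to end in tch-rs, in the style of the crate's transfer-learning example. This assumes the `tch` crate; `resnet18.ot` and `dataset/` are placeholder paths for pretrained weights and an ImageNet-style image directory.

```rust
use tch::{nn, nn::OptimizerConfig, vision::{imagenet, resnet}, Device};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let device = Device::cuda_if_available();

    // 1. Backbone without the final layer: outputs 512-dim features.
    let mut vs = nn::VarStore::new(device);
    let net = resnet::resnet18_no_final_layer(&vs.root());

    // 2. Pretrained weights; fc.* entries in the file have no matching
    //    variable in this store and are ignored.
    vs.load("resnet18.ot")?;

    // 3. Pre-compute features for every image once, gradients disabled.
    let dataset = imagenet::load_from_dir("dataset")?;
    let train_x =
        tch::no_grad(|| dataset.train_images.to_device(device).apply_t(&net, false));
    let test_x =
        tch::no_grad(|| dataset.test_images.to_device(device).apply_t(&net, false));

    // 4. Train a small linear classifier on the frozen features.
    let clf_vs = nn::VarStore::new(device);
    let clf = nn::linear(clf_vs.root(), 512, dataset.labels, Default::default());
    let mut opt = nn::Sgd::default().build(&clf_vs, 1e-3)?;
    for epoch in 1..=100 {
        let loss = train_x
            .apply(&clf)
            .cross_entropy_for_logits(&dataset.train_labels.to_device(device));
        opt.backward_step(&loss);
        let acc = test_x
            .apply(&clf)
            .accuracy_for_logits(&dataset.test_labels.to_device(device));
        println!("epoch {epoch}: test acc {:.2}%", 100. * acc.double_value(&[]));
    }
    Ok(())
}
```

Because only the linear head's `VarStore` is handed to the optimizer, the backbone weights are never updated; the expensive forward pass through the backbone happens exactly once per image rather than once per epoch.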