Implementation:Tensorflow Tfjs Layer Trainable Setter
Metadata
| Field | Value |
|---|---|
| Implementation Name | Tensorflow Tfjs Layer Trainable Setter |
| Library | TensorFlow.js |
| Domains | Transfer_Learning, Optimization |
| Type | API Doc |
| Implements | Principle:Tensorflow_Tfjs_Layer_Freezing |
| Source | TensorFlow.js |
| Last Updated | 2026-02-10 00:00 GMT |
Environment:Tensorflow_Tfjs_Browser_Runtime
Overview
The layer.trainable setter is the TensorFlow.js mechanism for freezing or unfreezing individual layers in a model. Setting layer.trainable = false prevents the layer's weights from being updated during training, effectively preserving the pretrained representations. This setter is defined on the base Layer class and is therefore available on every layer type in TensorFlow.js.
Description
The trainable setter on the Layer class performs two operations when called:
- It updates the trainable flag on each of the layer's trainable weight variables (_trainableWeights[i].trainable).
- It updates the layer's own internal trainable_ flag.
When trainable is set to false, the layer's weights are reported under the nonTrainableWeights collection instead of trainableWeights, so the optimizer excludes them from gradient updates. The model's compile method should be called after modifying trainable flags to ensure the optimizer is configured with the correct set of parameters.
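The two operations above can be modeled in isolation. The sketch below uses a plain MockLayer class (illustrative only, not the actual TensorFlow.js Layer class) to show how the per-weight flags and the derived trainableWeights collection respond to the setter:

```javascript
// Minimal model of the Layer.trainable setter's two operations.
// MockLayer is a stand-in for illustration, not a tfjs API.
class MockLayer {
  constructor(weightNames) {
    // Each weight variable carries its own trainable flag,
    // mirroring tfjs LayerVariable objects.
    this._trainableWeights = weightNames.map(name => ({ name, trainable: true }));
    this.trainable_ = true;
  }
  set trainable(trainable) {
    // Operation 1: propagate the flag to every weight variable.
    this._trainableWeights.forEach(w => (w.trainable = trainable));
    // Operation 2: record the flag on the layer itself.
    this.trainable_ = trainable;
  }
  get trainable() {
    return this.trainable_;
  }
  // Mirrors how the trainable collection is derived by filtering
  // on each variable's own flag.
  get trainableWeights() {
    return this._trainableWeights.filter(w => w.trainable);
  }
}

const layer = new MockLayer(['kernel', 'bias']);
console.log(layer.trainableWeights.length); // 2
layer.trainable = false;
console.log(layer.trainableWeights.length); // 0
```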
Code Reference
Source file: tfjs-layers/src/engine/topology.ts (Lines 683-686)
Implementation
// Layer class setter
set trainable(trainable: boolean) {
  this._trainableWeights.forEach(w => w.trainable = trainable);
  this.trainable_ = trainable;
}
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| trainable | boolean | Yes | When true, the layer's weights receive gradient updates during training. When false, the layer's weights are frozen and will not be updated. |
Return Value
| Type | Description |
|---|---|
| void | This is a setter; it does not return a value. It mutates the layer's internal state in place. |
I/O Contract
| Direction | Description |
|---|---|
| Inputs | A Layer instance and a boolean flag (true for trainable, false for frozen). |
| Outputs | The Layer with its updated trainable state. All _trainableWeights[i].trainable flags are set to the given value, and the layer's internal trainable_ flag is updated. |
| Side Effects | Mutates the layer's _trainableWeights array entries and the internal trainable_ flag. The model should be recompiled after changing these flags. |
| Errors | None directly. However, if the model is not recompiled after changing trainable flags, the optimizer may not reflect the updated parameter set. |
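The stale-optimizer pitfall in the Errors row can be illustrated with a toy "compile" step that snapshots the trainable parameter set once, the way an optimizer's parameter list is fixed at compile time. All names here are illustrative, not the TensorFlow.js API:

```javascript
// Toy illustration of why recompilation matters (not tfjs code):
// compile() captures the trainable weight list at the moment it runs.
function compile(model) {
  // Snapshot: flag changes made later are invisible to this list.
  return { params: model.weights.filter(w => w.trainable) };
}

const model = {
  weights: [
    { name: 'base/kernel', trainable: true },
    { name: 'head/kernel', trainable: true },
  ],
};

const optimizer = compile(model);
console.log(optimizer.params.length); // 2

// Freeze the base weight AFTER compiling...
model.weights[0].trainable = false;
// ...the stale optimizer still holds both parameters:
console.log(optimizer.params.length); // 2

// Recompiling picks up the correct set:
const fresh = compile(model);
console.log(fresh.params.length); // 1
```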
Usage Examples
Example 1: Freeze All Base Model Layers
// Freeze all base model layers
baseModel.layers.forEach(layer => {
  layer.trainable = false;
});
// Verify frozen state
console.log('Trainable weights:', baseModel.trainableWeights.length); // 0
Example 2: Selective Unfreezing (Fine-Tune Last Few Layers)
// Selective unfreezing (fine-tune last few layers)
for (let i = baseModel.layers.length - 5; i < baseModel.layers.length; i++) {
  baseModel.layers[i].trainable = true;
}
// Check which layers are trainable
baseModel.layers.forEach((layer, i) => {
  console.log(`${i}: ${layer.name} - trainable: ${layer.trainable}`);
});
Example 3: Freeze by Layer Type
// Freeze only convolutional layers, leave batch norm unfrozen
baseModel.layers.forEach(layer => {
  if (layer.getClassName() === 'Conv2D' || layer.getClassName() === 'DepthwiseConv2D') {
    layer.trainable = false;
  }
});
// Count trainable vs. non-trainable parameters
const trainableCount = baseModel.trainableWeights.reduce(
  (sum, w) => sum + w.shape.reduce((a, b) => a * b, 1), 0
);
const nonTrainableCount = baseModel.nonTrainableWeights.reduce(
  (sum, w) => sum + w.shape.reduce((a, b) => a * b, 1), 0
);
console.log(`Trainable params: ${trainableCount}`);
console.log(`Non-trainable params: ${nonTrainableCount}`);
Example 4: Two-Phase Training (Freeze Then Unfreeze)
// Phase 1: Freeze all base layers, train only the task head
baseModel.layers.forEach(layer => { layer.trainable = false; });
transferModel.compile({
  optimizer: tf.train.adam(0.001),
  loss: 'categoricalCrossentropy',
  metrics: ['accuracy']
});
await transferModel.fit(trainXs, trainYs, { epochs: 10 });
// Phase 2: Unfreeze last few base layers for fine-tuning
for (let i = baseModel.layers.length - 10; i < baseModel.layers.length; i++) {
  baseModel.layers[i].trainable = true;
}
// IMPORTANT: Recompile with a lower learning rate
transferModel.compile({
  optimizer: tf.train.adam(0.00001),
  loss: 'categoricalCrossentropy',
  metrics: ['accuracy']
});
await transferModel.fit(trainXs, trainYs, { epochs: 10 });
Example 5: Verify Freeze State on a Transfer Model
// After building the transfer model, verify the parameter counts
transferModel.summary();
// The summary will show:
// Total params: X
// Trainable params: Y (only task head params)
// Non-trainable params: Z (frozen base model params)
Usage
The trainable setter is used at two critical points in the transfer learning workflow:
- Before constructing the transfer model -- Freeze all base model layers so only the new task head is trainable.
- During fine-tuning (optional) -- Selectively unfreeze later base layers to allow them to adapt to the target task.
Important: Always call model.compile() after modifying trainable flags. The compile step configures the optimizer with the current set of trainable weights. Failing to recompile means the optimizer may attempt to update frozen weights or miss newly unfrozen weights.
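The freeze-then-recompile discipline can be wrapped in a small helper so the two steps cannot drift apart. The setTrainableAndRecompile function below is a sketch, not a tfjs API; it assumes `model` is a tf.LayersModel-like object exposing `layers` and `compile`, and passes `compileArgs` straight through:

```javascript
// Illustrative helper (not part of TensorFlow.js): toggle trainable on a
// half-open range of layers [fromIndex, toIndex), then recompile so the
// optimizer is rebuilt with the current set of trainable weights.
function setTrainableAndRecompile(model, fromIndex, toIndex, trainable, compileArgs) {
  for (let i = fromIndex; i < toIndex; i++) {
    model.layers[i].trainable = trainable;
  }
  // Recompile immediately, per the note above, so the optimizer
  // never sees a stale parameter set.
  model.compile(compileArgs);
}

// Example usage (compileArgs shown as in the fine-tuning example):
// setTrainableAndRecompile(transferModel, transferModel.layers.length - 10,
//   transferModel.layers.length, true,
//   { optimizer: tf.train.adam(0.00001), loss: 'categoricalCrossentropy' });
```

Bundling the flag change with the compile call is a small design choice that makes the "always recompile" rule impossible to forget at call sites.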
Related Pages
- Principle:Tensorflow_Tfjs_Layer_Freezing -- The principle this implementation realizes
- Implementation:Tensorflow_Tfjs_Tf_LoadLayersModel_For_Transfer -- Loading the base model whose layers will be frozen
- Implementation:Tensorflow_Tfjs_Container_GetLayer -- Selecting specific layers to freeze or unfreeze
- Implementation:Tensorflow_Tfjs_LayersModel_Compile_And_Fit_For_Transfer -- Compiling and training with frozen layers
Environments
- Environment:Tensorflow_Tfjs_Browser_Runtime -- Browser runtime (WebGL / WebGPU / WASM / CPU backends)