Implementation:Tensorflow_Tfjs_Tf_Model_Functional
Metadata
| Field | Value |
|---|---|
| Implementation Name | Tensorflow Tfjs Tf Model Functional |
| Library | TensorFlow.js |
| Domains | Transfer_Learning, Neural_Networks |
| Type | API Doc |
| Implements | Principle:Tensorflow_Tfjs_Task_Head_Construction |
| Source | TensorFlow.js |
| Last Updated | 2026-02-10 00:00 GMT |
Environment:Tensorflow_Tfjs_Browser_Runtime
Overview
tf.model is the TensorFlow.js Functional API factory function for creating models with arbitrary graph topologies. In transfer learning, it is used to construct a unified model that connects the pretrained base model's input and feature extraction output to a new task-specific head. The companion layer factory functions -- tf.layers.dense, tf.layers.dropout, and tf.layers.flatten -- are used to build the task head layers that are applied on top of the extracted features.
Description
The Functional API enables constructing models where the computation graph is explicitly defined by chaining layer.apply() calls on SymbolicTensor objects. This is essential for transfer learning because the resulting model must span two distinct parts:
- The pretrained base -- from the base model's input to the feature extraction layer's output.
- The new task head -- from the feature output through new layers to the final prediction.
The tf.model() function takes the starting input(s) and ending output(s) of this graph and returns a complete LayersModel that can be compiled and trained.
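The apply-chaining mechanism can be sketched without the library. The mock below uses plain objects rather than the real SymbolicTensor class, but shows the same idea: each apply() call records which layer produced the tensor and from what input, and a model factory can then walk from outputs back to inputs, throwing when the graph is disconnected, as tf.model() does. Names and shapes here are illustrative assumptions.

```javascript
// Hypothetical sketch (not the tfjs source): chaining apply() on
// symbolic placeholders defines a graph that the model factory
// later recovers by walking from outputs back to inputs.
function input(shape) {
  return { shape, sourceLayer: null, inputs: [] };
}

function denseLayer(units) {
  const layer = {
    name: `dense_${units}`,
    apply(x) {
      // A symbolic tensor records the layer that produced it and its inputs.
      return { shape: [x.shape[0], units], sourceLayer: layer, inputs: [x] };
    }
  };
  return layer;
}

function model({ inputs, outputs }) {
  // Walk backwards from outputs to recover the layer chain,
  // mirroring what the Functional API does with SymbolicTensors.
  const layers = [];
  let t = outputs;
  while (t.sourceLayer) {
    layers.unshift(t.sourceLayer);
    t = t.inputs[0];
  }
  if (t !== inputs) throw new Error('Graph is disconnected from inputs');
  return { layers, outputShape: outputs.shape };
}

const x = input([null, 1024]);       // e.g. extracted features (assumed size)
const h = denseLayer(128).apply(x);  // hidden head layer
const y = denseLayer(5).apply(h);    // 5-class output
const m = model({ inputs: x, outputs: y });
```

The same backward walk is why tf.model() only needs the endpoints of the graph: everything in between is reachable through the SymbolicTensor chain.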
Code Reference
Source files:
- tf.model: tfjs-layers/src/exports.ts (Lines 70-72)
- tf.layers.dense: tfjs-layers/src/exports_layers.ts (Lines 519-521)
- tf.layers.dropout: tfjs-layers/src/exports_layers.ts (Lines 533-535)
- tf.layers.flatten: tfjs-layers/src/exports_layers.ts (Lines 592-594)
API Signatures
```typescript
// Functional model factory
tf.model(args: ContainerArgs): LayersModel

// ContainerArgs interface
interface ContainerArgs {
  inputs: SymbolicTensor | SymbolicTensor[];
  outputs: SymbolicTensor | SymbolicTensor[];
  name?: string;
}

// Layer factories used in task head construction
tf.layers.dense(args: DenseLayerArgs): Dense
// DenseLayerArgs: { units: number, activation?: string, useBias?: boolean, kernelInitializer?: string, ... }

tf.layers.dropout(args: DropoutLayerArgs): Dropout
// DropoutLayerArgs: { rate: number, noiseShape?: number[], seed?: number }

tf.layers.flatten(args?: FlattenLayerArgs): Flatten
// FlattenLayerArgs: { dataFormat?: DataFormat }
```
Parameters
tf.model
| Parameter | Type | Required | Description |
|---|---|---|---|
| args.inputs | SymbolicTensor \| SymbolicTensor[] | Yes | The input(s) of the model. For transfer learning, this is typically baseModel.input. |
| args.outputs | SymbolicTensor \| SymbolicTensor[] | Yes | The output(s) of the model. For transfer learning, this is the final task head layer's output SymbolicTensor. |
| args.name | string | No | Optional name for the model. |
tf.layers.dense
| Parameter | Type | Required | Description |
|---|---|---|---|
| units | number | Yes | Number of output neurons. For the output layer, this equals the number of target classes. |
| activation | string | No | Activation function: relu, softmax, sigmoid, linear (default), etc. |
| useBias | boolean | No | Whether the layer uses a bias vector. Default: true. |
| kernelInitializer | string | No | Initializer for the weight matrix. Default: glorotUniform. |
tf.layers.dropout
| Parameter | Type | Required | Description |
|---|---|---|---|
| rate | number | Yes | Fraction of input units to drop during training (between 0 and 1). |
| noiseShape | number[] | No | Shape of the binary dropout mask. |
| seed | number | No | Random seed for reproducibility. |
tf.layers.flatten
| Parameter | Type | Required | Description |
|---|---|---|---|
| dataFormat | string | No | Data format convention: channelsFirst or channelsLast. Default: channelsLast. |
Return Value
| Function | Return Type | Description |
|---|---|---|
| tf.model | LayersModel | A complete model with the specified inputs and outputs, ready for compilation and training. |
| tf.layers.dense | Dense | A Dense layer instance. Call .apply(input) to connect it in the graph. |
| tf.layers.dropout | Dropout | A Dropout layer instance. Active during training, inactive during inference. |
| tf.layers.flatten | Flatten | A Flatten layer instance. Reshapes multi-dimensional input to 1D. |
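The "active during training, inactive during inference" behavior of Dropout follows the standard inverted-dropout scheme used by Keras-style libraries: surviving activations are scaled by 1/(1 - rate) during training so that no rescaling is needed at inference. A plain-JS sketch of that semantics (illustrative, not the tfjs implementation; the injectable rng exists only to make the behavior deterministic here):

```javascript
// Illustrative inverted-dropout sketch: during training each unit is
// dropped with probability `rate`, and survivors are scaled by
// 1 / (1 - rate) so the expected activation is unchanged; at
// inference the layer is the identity.
function dropout(values, rate, training, rng = Math.random) {
  if (!training) return values.slice();  // inactive during inference
  const keep = 1 - rate;
  return values.map(v => (rng() < rate ? 0 : v / keep));
}

const inferOut = dropout([1, 2, 3, 4], 0.5, false);          // identity
const allDropped = dropout([1, 2, 3, 4], 0.5, true, () => 0.0);  // all zeroed
const allKept = dropout([1, 2, 3, 4], 0.5, true, () => 0.99);    // scaled by 2
```

This is why a head trained with rate 0.5 needs no adjustment at prediction time: the scaling already happened during training.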
I/O Contract
| Direction | Description |
|---|---|
| Inputs | The base model's input SymbolicTensor (baseModel.input) and the feature extraction layer's output SymbolicTensor (featureLayer.output). These define the start and intermediate connection point of the computation graph. |
| Outputs | A new LayersModel that encompasses the frozen base model and the trainable task head as a single unified model. The model's output shape matches the target task (e.g., [batch, numClasses]). |
| Side Effects | Creates new layer instances and a new model object in memory. The base model's layers are shared (not copied) -- changes to their trainable property affect both models. |
| Errors | Throws if the SymbolicTensor graph is disconnected, if shapes are incompatible between connected layers, or if inputs/outputs are not valid SymbolicTensors. |
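The Side Effects row is worth emphasizing: the transfer model holds references to the very same layer objects as the base model, so state such as the trainable flag is shared. A minimal aliasing sketch (plain objects, hypothetical layer names):

```javascript
// Sketch of the shared-layer side effect: the transfer model references
// the base model's layer objects, so toggling `trainable` through
// either model affects both.
const convLayer = { name: 'conv_pw_13', trainable: true };
const baseModel = { layers: [convLayer] };
const transferModel = {
  layers: [convLayer, { name: 'dense_head', trainable: true }]
};

// Freeze via the base model...
baseModel.layers.forEach(l => { l.trainable = false; });
// ...and the same object inside the transfer model is frozen too,
// while the new head layer stays trainable.
const frozenInTransfer = transferModel.layers[0].trainable === false;
```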
Usage Examples
Example 1: Build Task Head Using Functional API
```javascript
// Build task head using Functional API
const featureOutput = baseModel.getLayer('conv_pw_13_relu').output;
const flat = tf.layers.flatten().apply(featureOutput);
const dense1 = tf.layers.dense({units: 128, activation: 'relu'}).apply(flat);
const drop = tf.layers.dropout({rate: 0.5}).apply(dense1);
const output = tf.layers.dense({units: 5, activation: 'softmax'}).apply(drop);
const transferModel = tf.model({
  inputs: baseModel.input,
  outputs: output
});
transferModel.summary();
```
Example 2: Minimal Task Head (Binary Classification)
```javascript
// Minimal task head for binary classification
const featureOutput = baseModel.getLayer('conv_pw_13_relu').output;
const pooled = tf.layers.globalAveragePooling2d({}).apply(featureOutput);
const output = tf.layers.dense({units: 1, activation: 'sigmoid'}).apply(pooled);
const binaryModel = tf.model({
  inputs: baseModel.input,
  outputs: output
});
```
Example 3: Multi-Layer Task Head with Regularization
```javascript
// Multi-layer task head with L2 regularization
const featureOutput = baseModel.getLayer('conv_pw_13_relu').output;
const flat = tf.layers.flatten().apply(featureOutput);
const dense1 = tf.layers.dense({
  units: 256,
  activation: 'relu',
  kernelRegularizer: tf.regularizers.l2({l2: 0.01})
}).apply(flat);
const drop1 = tf.layers.dropout({rate: 0.5}).apply(dense1);
const dense2 = tf.layers.dense({
  units: 128,
  activation: 'relu',
  kernelRegularizer: tf.regularizers.l2({l2: 0.01})
}).apply(drop1);
const drop2 = tf.layers.dropout({rate: 0.3}).apply(dense2);
const output = tf.layers.dense({units: 10, activation: 'softmax'}).apply(drop2);
const transferModel = tf.model({
  inputs: baseModel.input,
  outputs: output
});
```
Example 4: Regression Task Head
```javascript
// Regression task head (e.g., predicting a continuous value)
const featureOutput = baseModel.getLayer('conv_pw_13_relu').output;
const pooled = tf.layers.globalAveragePooling2d({}).apply(featureOutput);
const dense1 = tf.layers.dense({units: 64, activation: 'relu'}).apply(pooled);
const output = tf.layers.dense({units: 1, activation: 'linear'}).apply(dense1);
const regressionModel = tf.model({
  inputs: baseModel.input,
  outputs: output
});
```
Usage
The Functional API (tf.model) is the preferred approach for building transfer learning models because it allows constructing non-linear graph topologies that connect pretrained and new layers. The construction workflow is:
- Get the feature SymbolicTensor -- baseModel.getLayer(name).output.
- Chain new layers -- Call layer.apply(previousOutput) sequentially to build the task head.
- Create the model -- Pass baseModel.input as inputs and the final layer's output as outputs to tf.model().
The resulting model shares layers with the base model. Freezing layers on the base model also freezes them in the transfer model, since they are the same objects.
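As a sanity check when sizing a head, a dense layer's trainable parameter count is inputDim * units weights plus units biases (when useBias is true, the default); flatten and dropout add no parameters. A small helper, where the 1024-element flattened feature size is a hypothetical assumption rather than a value from any specific base model:

```javascript
// Standard dense-layer parameter count: inputDim * units weights
// plus `units` bias terms when useBias is true.
function denseParams(inputDim, units, useBias = true) {
  return inputDim * units + (useBias ? units : 0);
}

// Head shaped like Example 1: flatten -> dense(128) -> dropout -> dense(5),
// assuming (hypothetically) a 1024-element flattened feature vector.
const head1 = denseParams(1024, 128);  // 1024 * 128 + 128 = 131200
const head2 = denseParams(128, 5);     // 128 * 5 + 5 = 645
const totalTrainable = head1 + head2;  // dropout/flatten contribute nothing
```

Comparing such a count against model.summary() output is a quick way to confirm that only the head is trainable after freezing the base.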
Related Pages
- Principle:Tensorflow_Tfjs_Task_Head_Construction -- The principle this implementation realizes
- Implementation:Tensorflow_Tfjs_Container_GetLayer -- Getting the feature extraction layer's output
- Implementation:Tensorflow_Tfjs_Layer_Trainable_Setter -- Freezing base layers in the transfer model
- Implementation:Tensorflow_Tfjs_LayersModel_Compile_And_Fit_For_Transfer -- Compiling and training the constructed model
Environments
- Environment:Tensorflow_Tfjs_Browser_Runtime -- Browser runtime (WebGL / WebGPU / WASM / CPU backends)