
Implementation:Tensorflow Tfjs Tf LoadLayersModel For Transfer

From Leeroopedia


Metadata

Field Value
Implementation Name Tensorflow Tfjs Tf LoadLayersModel For Transfer
Library TensorFlow.js
Domains Transfer_Learning, Model_Loading
Type API Doc (transfer learning context)
Implements Principle:Tensorflow_Tfjs_Base_Model_Loading
Source TensorFlow.js
Last Updated 2026-02-10 00:00 GMT

Environment:Tensorflow_Tfjs_Browser_Runtime

Overview

tf.loadLayersModel is the TensorFlow.js API for loading a pretrained Keras-compatible model (topology + weights) from a URL, file path, or custom IOHandler. In the context of transfer learning, this function is the entry point for obtaining a base model whose learned representations will be reused for a new task. It deserializes the model's JSON topology and binary weight shards into a fully functional LayersModel instance.

Description

This API loads a model that was previously saved in the TensorFlow.js Layers format (or converted from a Keras/TensorFlow SavedModel). For transfer learning workflows, the loaded model serves as the base model -- the pretrained backbone from which features will be extracted.

The function accepts either a string URL/path or a custom io.IOHandler object. When given a string, the function resolves the protocol scheme (e.g., https://, indexeddb://, localstorage://, file://) to determine the appropriate loading mechanism. The model.json file at the given path contains both the model topology and a manifest of weight shard files.
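As a sketch of the two input forms, the same HTTP-hosted model can be loaded via a scheme-prefixed string or by constructing the HTTP IOHandler explicitly with tf.io.http (the URL below is illustrative, not a real model location):

```javascript
// Two equivalent ways of pointing tf.loadLayersModel at an HTTP-hosted model.
const url = 'https://example.com/models/base/model.json';  // illustrative URL

// 1. String form: the https:// scheme selects the HTTP IOHandler internally.
const modelA = await tf.loadLayersModel(url);

// 2. Explicit IOHandler form: tf.io.http builds the same handler by hand,
//    useful when the handler needs to be reused or wrapped.
const modelB = await tf.loadLayersModel(tf.io.http(url));
```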

Code Reference

Source file: tfjs-layers/src/models.ts (Lines 248-270)

Function Signature

tf.loadLayersModel(
  pathOrIOHandler: string | io.IOHandler,
  options?: io.LoadOptions
): Promise<LayersModel>

TypeScript Signature

export async function loadLayersModel(
  pathOrIOHandler: string | io.IOHandler,
  options?: io.LoadOptions
): Promise<LayersModel>

Parameters

Parameter Type Required Description
pathOrIOHandler string | io.IOHandler Yes URL/path to the model.json file, or a custom IOHandler instance. Supported schemes include https://, http://, indexeddb://, localstorage://, and file:// (Node.js).
options io.LoadOptions No Optional configuration for loading behavior, including custom weight URL converters, request options, and strict mode settings.

Return Value

Type Description
Promise<LayersModel> A promise that resolves to a fully constructed LayersModel with architecture and pretrained weights loaded. For transfer learning, this model's intermediate layers provide feature extraction capabilities.

I/O Contract

Direction Description
Inputs A URL or path pointing to a pretrained model's model.json file (e.g., MobileNet, ResNet hosted on Google Cloud Storage or locally). Optionally, an io.LoadOptions object.
Outputs A Promise<LayersModel> containing the full pretrained architecture and weights. For transfer learning, the loaded model's intermediate layers will be accessed via getLayer() to extract features.
Side Effects Network requests to fetch model topology and weight shard files. Model is registered in the TensorFlow.js runtime.
Errors Throws if the URL is unreachable, the JSON is malformed, weight shards are missing, or the model topology contains unsupported layer types.
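Because loading is asynchronous, the failure modes listed above surface as a rejected promise and can be handled with an ordinary try/catch around the awaited call. A minimal sketch, using a placeholder URL:

```javascript
// Hedged sketch: catch the rejection from a failed model load.
let baseModel;
try {
  baseModel = await tf.loadLayersModel('https://example.com/missing/model.json');
} catch (err) {
  // An unreachable URL, malformed model.json, missing weight shards, or
  // unsupported layer types all reject the returned promise.
  console.error('Failed to load base model:', err.message);
}
```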

Usage Examples

Example 1: Load MobileNet as Base Model for Transfer Learning

// Load MobileNet as base model for transfer learning
const baseModel = await tf.loadLayersModel(
  'https://storage.googleapis.com/tfjs-models/tfjs/mobilenet_v1_0.25_224/model.json'
);
console.log('Base model layers:', baseModel.layers.length);
baseModel.summary();

Example 2: Load a Locally Saved Pretrained Model (Node.js)

// Load a pretrained model from the local filesystem
const baseModel = await tf.loadLayersModel(
  'file://./pretrained-models/mobilenet/model.json'
);

// Inspect the model architecture for transfer learning
baseModel.layers.forEach((layer, i) => {
  console.log(`Layer ${i}: ${layer.name} -> ${JSON.stringify(layer.outputShape)}`);
});

Example 3: Load from IndexedDB (Previously Cached in Browser)

// Load a pretrained model previously saved to IndexedDB
const baseModel = await tf.loadLayersModel('indexeddb://mobilenet-base');

// Confirm the model is loaded and ready for transfer learning
console.log('Input shape:', baseModel.inputs[0].shape);
console.log('Output shape:', baseModel.outputs[0].shape);
console.log('Total params:', baseModel.countParams());
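For reference, a model reaches IndexedDB in the first place via LayersModel.save with the same scheme. A minimal one-time caching step that makes the load above possible:

```javascript
// Download once over the network, then persist to IndexedDB so later
// sessions can load offline with tf.loadLayersModel('indexeddb://...').
const remote = await tf.loadLayersModel(
  'https://storage.googleapis.com/tfjs-models/tfjs/mobilenet_v1_0.25_224/model.json'
);
await remote.save('indexeddb://mobilenet-base');
```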

Example 4: Load with Custom Options

// Load with custom fetch options (e.g., authentication headers)
const baseModel = await tf.loadLayersModel(
  'https://my-model-server.com/models/resnet50/model.json',
  {
    requestInit: {
      headers: { 'Authorization': 'Bearer my-token' }
    },
    strict: true  // Ensure all weights are loaded
  }
);

Usage

In a transfer learning pipeline, tf.loadLayersModel is typically the first step. The returned LayersModel is then used to:

  1. Inspect the architecture -- List layers and their output shapes to identify suitable feature extraction points.
  2. Select a feature extraction layer -- Use getLayer() to access an intermediate layer's output.
  3. Freeze layers -- Set layer.trainable = false on base model layers.
  4. Build a new model -- Use tf.model() to create a new model connecting the base model's input to the feature layer and then to a new task-specific head.
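The four steps above can be sketched end to end. This is a hedged illustration: the layer name 'conv_pw_13_relu' is a MobileNet v1 activation commonly used as a feature extraction point, and the head sizes and class count are placeholders for a real task:

```javascript
// Sketch of the transfer-learning pipeline described above.
const baseModel = await tf.loadLayersModel(
  'https://storage.googleapis.com/tfjs-models/tfjs/mobilenet_v1_0.25_224/model.json'
);

// 1. Inspect the architecture to find a suitable feature extraction point.
baseModel.layers.forEach((layer, i) =>
  console.log(i, layer.name, JSON.stringify(layer.outputShape)));

// 2. Select an intermediate layer's output (name is MobileNet-v1-specific).
const featureLayer = baseModel.getLayer('conv_pw_13_relu');

// 3. Freeze the base model's layers so the pretrained weights stay fixed.
baseModel.layers.forEach(layer => layer.trainable = false);

// 4. Build a new model: base input -> feature layer -> task-specific head.
const NUM_CLASSES = 3;  // placeholder for the new task's class count
let x = tf.layers.flatten().apply(featureLayer.output);
x = tf.layers.dense({units: 100, activation: 'relu'}).apply(x);
const output = tf.layers.dense(
  {units: NUM_CLASSES, activation: 'softmax'}).apply(x);

const transferModel = tf.model({inputs: baseModel.inputs, outputs: output});
transferModel.compile({optimizer: 'adam', loss: 'categoricalCrossentropy'});
```

Only the new dense head has trainable parameters, so a subsequent transferModel.fit() updates the head while leaving the frozen backbone untouched.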

Related Pages

Environments
