
Implementation:Tensorflow Tfjs LayersModel Save

From Leeroopedia


Overview

Tensorflow_Tfjs_LayersModel_Save documents the TensorFlow.js API for persisting a trained model's architecture and weights to various storage backends. The save() method serializes the model topology as JSON and weights as binary data, writing them to a destination specified by a URL scheme string or a custom IOHandler.

Principle:Tensorflow_Tfjs_Model_Serialization

TensorFlow.js

Deep_Learning Model_Persistence

Environment:Tensorflow_Tfjs_Browser_Runtime Environment:Tensorflow_Tfjs_Node_Native_Runtime

Type: API Doc

External Dependencies: @tensorflow/tfjs-core (io module)

API Signature

async save(
  handlerOrURL: io.IOHandler | string,
  config?: io.SaveConfig
): Promise<io.SaveResult>

SaveConfig

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| includeOptimizer | boolean | false | Whether to include the optimizer state in the serialized output. Set to true if you plan to resume training from the saved model. |

SaveResult

| Property | Type | Description |
| --- | --- | --- |
| modelArtifactsInfo | ModelArtifactsInfo | Metadata about the saved artifacts |
| modelArtifactsInfo.dateSaved | Date | Timestamp of when the model was saved |
| modelArtifactsInfo.modelTopologyBytes | number | Size of the topology JSON in bytes |
| modelArtifactsInfo.weightSpecsBytes | number | Size of the weight specification JSON in bytes |
| modelArtifactsInfo.weightDataBytes | number | Size of the binary weight data in bytes |
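These byte counts can be turned into human-readable sizes with a small helper (a hypothetical utility for illustration, not part of tfjs):

```javascript
// Hypothetical helper (not a tfjs API): pretty-print the byte counts
// reported in SaveResult.modelArtifactsInfo.
function formatBytes(n) {
  const units = ['B', 'KB', 'MB', 'GB'];
  let i = 0;
  while (n >= 1024 && i < units.length - 1) {
    n /= 1024;
    i++;
  }
  return `${n.toFixed(1)} ${units[i]}`;
}

console.log(formatBytes(5242880)); // '5.0 MB'
```

This is handy when checking a model against the localStorage quota before choosing a storage backend.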

Code Reference

Source file:

  • tfjs-layers/src/engine/training.ts — Lines 2112-2166

The save method serializes the model in two phases:

  1. Topology serialization -- The model's layer graph is traversed and each layer's class name, configuration, and inbound connections are encoded into a JSON object.
  2. Weight serialization -- All trainable and non-trainable weight tensors are enumerated, their metadata (name, shape, dtype) is recorded as JSON weight specs, and their raw numerical data is concatenated into binary ArrayBuffer chunks.

The serialized artifacts are then passed to the appropriate IOHandler based on the URL scheme or the provided handler object. Built-in handlers support localStorage, IndexedDB, file system (Node.js), and HTTP uploads.
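The second phase can be sketched in plain JavaScript (an illustrative approximation of the mechanism, not the library's actual code; it assumes all weights are float32):

```javascript
// Sketch of weight serialization: record per-weight metadata as specs and
// concatenate the raw values into one ArrayBuffer. Illustrative only.
function serializeWeights(weights) {
  // weights: [{name, shape, data: Float32Array}]
  const specs = weights.map(w => ({name: w.name, shape: w.shape, dtype: 'float32'}));
  const totalBytes = weights.reduce((n, w) => n + w.data.byteLength, 0);
  const buffer = new ArrayBuffer(totalBytes);
  const view = new Float32Array(buffer);
  let offset = 0; // offset in float32 elements
  for (const w of weights) {
    view.set(w.data, offset);
    offset += w.data.length;
  }
  return {weightSpecs: specs, weightData: buffer};
}

const {weightSpecs, weightData} = serializeWeights([
  {name: 'dense/kernel', shape: [2, 2], data: new Float32Array([1, 2, 3, 4])},
  {name: 'dense/bias', shape: [2], data: new Float32Array([0.5, -0.5])},
]);
console.log(weightSpecs.length);    // 2
console.log(weightData.byteLength); // 24
```

The real implementation also handles other dtypes and alignment, but the spec-plus-concatenated-buffer shape of the output is the same.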

URL Schemes

| URL Scheme | Environment | Description | Size Limit |
| --- | --- | --- | --- |
| localstorage:// | Browser | Saves to browser localStorage | ~5 MB |
| indexeddb:// | Browser | Saves to browser IndexedDB | ~50 MB+ (browser-dependent) |
| file:// | Node.js | Saves to the local file system as model.json plus binary weight shard files | Disk space |
| http:// or https:// | Both | POSTs the model artifacts to the specified URL | Server-dependent |
| downloads:// | Browser | Triggers a browser file download of the model files | Disk space |
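The scheme-to-handler dispatch can be pictured with a small parser sketch (an assumption about the mechanism, shown for illustration; the actual handler registry lives in tfjs-core's io module):

```javascript
// Illustrative sketch: the part of the URL before '://' selects a handler,
// and the remainder names the destination within that backend.
function parseSaveURL(url) {
  const match = /^([a-z]+):\/\/(.+)$/.exec(url);
  if (match === null) {
    throw new Error(`Cannot find a save handler for URL: ${url}`);
  }
  return {scheme: match[1], path: match[2]};
}

console.log(parseSaveURL('indexeddb://my-model'));
// { scheme: 'indexeddb', path: 'my-model' }
```

Passing a string with an unrecognized or missing scheme to save() fails in the same spirit: no registered handler matches, so an error is thrown.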

Import

import * as tf from '@tensorflow/tfjs';

I/O Contract

Inputs

| Input | Type | Description |
| --- | --- | --- |
| Model | LayersModel | A trained model (does not need to be compiled for saving, but compilation info is preserved if present) |
| handlerOrURL | io.IOHandler \| string | The destination for the serialized model: either a URL string with a recognized scheme or a custom IOHandler object |
| config | io.SaveConfig | Optional configuration controlling what is included in the serialized output |

Outputs

| Output | Type | Description |
| --- | --- | --- |
| saveResult | Promise&lt;io.SaveResult&gt; | A promise that resolves with metadata about the saved model artifacts, including sizes and timestamp |

Serialized Artifacts

The save operation produces the following artifacts:

| Artifact | Format | Contents |
| --- | --- | --- |
| model.json | JSON | Model topology, weight specifications, and training configuration |
| Binary weight file(s), e.g. weights.bin or group1-shardXofY.bin | Binary | Concatenated weight data, split into shard files when large (written alongside model.json by the file:// and downloads:// schemes) |
| Optimizer weights (optional) | Binary | Optimizer internal state (momentum, velocity, etc.) when includeOptimizer: true |
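A hedged sketch of the model.json layout helps connect the two artifacts (field names follow the tfjs layers-model format; the concrete values here are illustrative):

```javascript
// Illustrative model.json structure. The weightsManifest ties the JSON
// file to the binary weight file(s): `paths` lists the shard files and
// `weights` describes the tensors stored in them, in order.
const modelJSON = {
  format: 'layers-model',
  generatedBy: 'TensorFlow.js tfjs-layers',
  modelTopology: {
    class_name: 'Sequential',
    config: {name: 'sequential_1', layers: [/* per-layer class_name + config */]}
  },
  weightsManifest: [{
    paths: ['./my-model.weights.bin'],
    weights: [
      {name: 'dense_1/kernel', shape: [10, 64], dtype: 'float32'},
      {name: 'dense_1/bias', shape: [64], dtype: 'float32'}
    ]
  }]
};

console.log(Object.keys(modelJSON.weightsManifest[0])); // [ 'paths', 'weights' ]
```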

Usage Examples

Save to Browser localStorage

const saveResult = await model.save('localstorage://my-model');
console.log('Model saved at:', saveResult.modelArtifactsInfo.dateSaved);
console.log('Topology size:', saveResult.modelArtifactsInfo.modelTopologyBytes, 'bytes');
console.log('Weight data size:', saveResult.modelArtifactsInfo.weightDataBytes, 'bytes');

Save to Browser IndexedDB

// IndexedDB supports larger models than localStorage
await model.save('indexeddb://my-model');

// Later, load it back
const loadedModel = await tf.loadLayersModel('indexeddb://my-model');

Save to File System (Node.js)

// Saves model.json and weight shard files to the specified directory
await model.save('file://./my-model');

// This creates:
//   ./my-model/model.json
//   ./my-model/weights.bin (or multiple shard files)

// Later, load it back
const loadedModel = await tf.loadLayersModel('file://./my-model/model.json');

Save to HTTP Endpoint

// POST the model to a server endpoint
await model.save('http://localhost:3000/upload-model');

// The server receives a multipart request with:
//   - model.json (topology + weight specs)
//   - Binary weight data

Include Optimizer State for Resumed Training

// Save with optimizer state to resume training later
await model.save('localstorage://my-model', {includeOptimizer: true});

// Later, load and continue training
const loadedModel = await tf.loadLayersModel('localstorage://my-model');
// Optimizer state is restored, so training resumes from where it left off
await loadedModel.fit(newTrainXs, newTrainYs, {epochs: 10});

Trigger Browser Download

// Triggers a file download in the browser
await model.save('downloads://my-model');

// The user's browser downloads:
//   my-model.json
//   my-model.weights.bin

Custom IOHandler

// Implement a custom IOHandler for specialized storage
const customHandler = {
  async save(modelArtifacts) {
    // modelArtifacts contains:
    //   modelTopology: object (JSON-serializable topology)
    //   weightSpecs: WeightsManifestEntry[] (name, shape, dtype per weight)
    //   weightData: ArrayBuffer (concatenated binary weight values)
    const topology = JSON.stringify(modelArtifacts.modelTopology);
    const weightSpecs = JSON.stringify(modelArtifacts.weightSpecs);
    const weightData = modelArtifacts.weightData;

    // Store all three pieces to your custom backend; weightSpecs are needed
    // to reinterpret the binary buffer on load.
    await myCustomStorage.put('topology', topology);
    await myCustomStorage.put('weightSpecs', weightSpecs);
    await myCustomStorage.put('weights', weightData);

    return {
      modelArtifactsInfo: {
        dateSaved: new Date(),
        modelTopologyType: 'JSON',
        modelTopologyBytes: topology.length,
        weightSpecsBytes: weightSpecs.length,
        weightDataBytes: weightData.byteLength
      }
    };
  }
};

await model.save(customHandler);

Complete Save and Load Workflow

// Train a model
const model = tf.sequential();
model.add(tf.layers.dense({units: 64, activation: 'relu', inputShape: [10]}));
model.add(tf.layers.dense({units: 3, activation: 'softmax'}));
model.compile({optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy']});

// ... training happens here ...

// Save the model
const saveResult = await model.save('indexeddb://trained-classifier');
console.log('Saved:', JSON.stringify(saveResult.modelArtifactsInfo));

// In a new session, load and use the model
const loadedModel = await tf.loadLayersModel('indexeddb://trained-classifier');
const prediction = loadedModel.predict(tf.tensor2d([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]));
prediction.print();

prediction.dispose();

Important Notes

  • The save() method is asynchronous and returns a Promise. Always use await or .then() to handle the result.
  • The model does not need to be compiled to be saved. However, if it is compiled, the compilation configuration (optimizer, loss, metrics) is included in the topology JSON and will be restored on load.
  • When using localstorage://, be aware of the ~5MB storage limit. For models larger than this, use indexeddb:// or file:// instead.
  • The includeOptimizer option can significantly increase the saved size (roughly doubling or tripling the weight data) because optimizer state tensors (e.g., first and second moment estimates for Adam) are saved alongside model weights.
  • When saving to http:// or https://, the server must accept a POST request with the model artifacts. TensorFlow.js sends them as multipart form data containing a model.json file (topology plus weight manifest) and the binary weight file(s).
  • The file:// scheme is only available in Node.js environments. It is not supported in browser contexts.
  • After saving, the returned SaveResult provides byte counts that can be used to verify the save operation and estimate storage requirements.
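The localStorage quota note above suggests a fallback pattern. The helper below is hypothetical (not a tfjs API) and assumes `model` is a trained tf.LayersModel:

```javascript
// Hypothetical helper (not part of tfjs): try localStorage first, and fall
// back to IndexedDB if the save fails (e.g. quota exceeded on a >5 MB model).
async function saveWithFallback(model, name) {
  try {
    return await model.save(`localstorage://${name}`);
  } catch (err) {
    console.warn(`localStorage save failed (${err.message}); using IndexedDB`);
    return await model.save(`indexeddb://${name}`);
  }
}
```

When loading later, try the same schemes in the same order so the model is found wherever it ended up.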


2026-02-10 00:00 GMT
