
Implementation:Tensorflow Tfjs Tf Saved Model Save



Knowledge Sources

Domains: Model_Persistence, Deployment
Principle: Principle:Tensorflow_Tfjs_Python_Model_Export
Type: External Tool Doc (Python TF/Keras)
Last Updated: 2026-02-10 00:00 GMT
Environment: Environment:Tensorflow_Tfjs_Python_Converter

Overview

This implementation documents the concrete Python APIs for exporting trained TensorFlow and Keras models to the SavedModel or HDF5 format. These exported models serve as the input to the TensorFlow.js converter pipeline. Although these are Python APIs (not part of the TF.js JavaScript codebase), they are an essential prerequisite in the TF.js model deployment workflow.
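
For orientation, the exported artifact is what the tensorflowjs_converter command-line tool (installed with the tensorflowjs pip package) consumes next. A minimal sketch of that downstream step, using the illustrative SavedModel path from the examples below:

tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    /tmp/mnist_saved_model \
    /tmp/tfjs_model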

API: tf.saved_model.save

Signature

tf.saved_model.save(
    obj,
    export_dir,
    signatures=None,
    options=None
)

Parameters

obj (tf.Module or tf.keras.Model, required)
    The trackable TensorFlow object to export. Must have at least one tf.function-decorated method or be a Keras model with a built graph.
export_dir (string, required)
    The directory path where the SavedModel will be written. The directory is created if it does not exist; existing contents are overwritten.
signatures (dict or tf.function, optional)
    Serving signatures mapping string keys to concrete tf.function traces. If None, defaults are inferred from __call__ or serving_default.
options (tf.saved_model.SaveOptions, optional)
    Advanced options, including experimental settings for function tracing, variable policy, and custom saving behavior.
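
A brief sketch of the options parameter in use (save_debug_info is a real tf.saved_model.SaveOptions field; the model and path here are illustrative):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(4,))])

# save_debug_info writes graph debug information alongside the model,
# which can help diagnose converter errors later in the pipeline.
options = tf.saved_model.SaveOptions(save_debug_info=True)
tf.saved_model.save(model, '/tmp/model_with_options', options=options)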

Output Directory Structure

export_dir/
  saved_model.pb          # Serialized MetaGraphDef protocol buffer
  variables/
    variables.index        # Index file for checkpoint shards
    variables.data-00000-of-00001  # Weight data shard(s)
  assets/                  # Optional: vocabulary files, lookup tables
  fingerprint.pb           # Model fingerprint (TF 2.12+)
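
A quick sanity check that the expected files were written (a minimal sketch; the path is illustrative and matches the example below):

import os

export_dir = '/tmp/mnist_saved_model'
for root, dirs, files in os.walk(export_dir):
    for name in files:
        path = os.path.join(root, name)
        print(path, os.path.getsize(path), 'bytes')
# Expect saved_model.pb plus variables/variables.index and
# variables/variables.data-* shards.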

Example

import tensorflow as tf

# Example 1: Export a Keras Sequential model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Placeholder training data so the example runs end to end;
# substitute real data (e.g. MNIST) in practice.
x_train = tf.random.normal([256, 784])
y_train = tf.random.uniform([256], maxval=10, dtype=tf.int32)
model.fit(x_train, y_train, epochs=5)

# Export as SavedModel
tf.saved_model.save(model, '/tmp/mnist_saved_model')

# Example 2: Export with explicit serving signature
@tf.function(input_signature=[tf.TensorSpec(shape=[None, 784], dtype=tf.float32)])
def serve_fn(x):
    return model(x, training=False)

tf.saved_model.save(
    model,
    '/tmp/mnist_saved_model_with_sig',
    signatures={'serving_default': serve_fn}
)

# Example 3: Export a custom tf.Module
class MyModule(tf.Module):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(10)

    @tf.function(input_signature=[tf.TensorSpec(shape=[None, 784], dtype=tf.float32)])
    def __call__(self, x):
        return self.dense(x)

module = MyModule()
module(tf.zeros([1, 784]))  # Build the module
tf.saved_model.save(module, '/tmp/custom_module')

API: model.save() (Keras)

Signature

# SavedModel format (default in TF 2.x)
model.save('path/to/model_directory')

# HDF5 format (explicit extension)
model.save('path/to/model.h5')

# With explicit format argument
model.save('path/to/model', save_format='tf')   # SavedModel
model.save('path/to/model', save_format='h5')    # HDF5

Parameters

filepath (string, required)
    Path to save the model. If the path ends with .h5 or .keras, the corresponding format is used; otherwise the SavedModel format is used by default.
overwrite (bool, optional)
    Whether to silently overwrite existing files. Default: True.
save_format (string, optional)
    'tf' for SavedModel, 'h5' for HDF5. Inferred from the filepath extension if not specified.
include_optimizer (bool, optional)
    Whether to save the optimizer state. Default: True.
signatures (dict, optional)
    Serving signatures (SavedModel format only).

Example

import tensorflow as tf

# Load a pretrained model (imagenet weights; no training step needed)
model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    weights='imagenet'
)

# Save as SavedModel directory (recommended for tfjs_graph_model conversion)
model.save('/tmp/mobilenet_v2_saved_model')

# Save as HDF5 file (for tfjs_layers_model conversion)
model.save('/tmp/mobilenet_v2.h5')
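
Since TF.js deployment is inference-only, the optimizer state documented above can be dropped at save time to shrink the artifact. A small sketch reusing the model object from this example:

# Optimizer state (e.g. Adam slot variables) is only needed to resume
# training; dropping it reduces file size for inference-only deployment.
model.save('/tmp/mobilenet_v2_inference.h5', include_optimizer=False)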

API: model.save_weights()

Signature

model.save_weights('path/to/weights.h5')
model.save_weights('path/to/weights', save_format='tf')

This saves only the weights without the model architecture. This is useful when the model architecture is defined in code and only the weight values need to be persisted. However, the TF.js converter typically requires the full model (architecture + weights), so save_weights alone is not sufficient for TF.js conversion.
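
To illustrate the weights-only workflow described above, a sketch that rebuilds the same architecture in code and restores the weights (paths and layer sizes are illustrative):

import tensorflow as tf

def build_model():
    # The architecture must be reproduced exactly in code.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10, activation='softmax')
    ])

model = build_model()
model.save_weights('/tmp/weights.h5')

# Later, or in another process: rebuild the architecture, then load.
restored = build_model()
restored.load_weights('/tmp/weights.h5')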

Format Comparison for TF.js Conversion

tf.saved_model.save()
    Output format: SavedModel directory. Converter input: --input_format=tf_saved_model. TF.js output: tfjs_graph_model. Best for production models, custom ops, and full graph preservation.
model.save('dir/')
    Output format: Keras SavedModel directory. Converter input: --input_format=keras_saved_model. TF.js output: tfjs_layers_model. Best for Keras models that need the Layers API in TF.js.
model.save('f.h5')
    Output format: Keras HDF5 file. Converter input: --input_format=keras. TF.js output: tfjs_layers_model. Best for simple Keras models and backwards compatibility.
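
For the tfjs_layers_model rows, the conversion can also run directly from Python without an intermediate file. A sketch, assuming the tensorflowjs pip package is installed:

import tensorflow as tf
import tensorflowjs as tfjs

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=(784,))
])

# Writes model.json plus binary weight shards, loadable in the browser
# with tf.loadLayersModel().
tfjs.converters.save_keras_model(model, '/tmp/tfjs_layers_model')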

Serving Signatures

Serving signatures define the input/output contract for the exported model. They are critical for TF.js graph model conversion because the converter uses the signature to determine which nodes in the graph are inputs and outputs.

# Define an explicit serving signature with input specifications
@tf.function(input_signature=[
    tf.TensorSpec(shape=[None, 224, 224, 3], dtype=tf.float32, name='input_image')
])
def classify(image):
    predictions = model(image, training=False)
    return {'class_probabilities': predictions}

tf.saved_model.save(
    model,
    '/tmp/model_with_signature',
    signatures={'serving_default': classify}
)

# Verify the signature
loaded = tf.saved_model.load('/tmp/model_with_signature')
print(list(loaded.signatures.keys()))
# Output: ['serving_default']
print(loaded.signatures['serving_default'].structured_input_signature)
print(loaded.signatures['serving_default'].structured_outputs)
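
The signatures dict is not limited to one entry; a model can export several named entry points. A sketch (the second signature name and its preprocessing step are illustrative):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation='relu', input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax')
])

@tf.function(input_signature=[tf.TensorSpec([None, 224, 224, 3], tf.float32)])
def classify(image):
    return {'probs': model(image, training=False)}

@tf.function(input_signature=[tf.TensorSpec([None, 224, 224, 3], tf.float32)])
def classify_raw(image):
    # Hypothetical second entry point: scales raw pixel values first.
    return {'probs': model(image / 255.0, training=False)}

tf.saved_model.save(
    model,
    '/tmp/multi_signature_model',
    signatures={'serving_default': classify, 'classify_raw': classify_raw}
)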

Verification

After exporting, verify the model before running the TF.js converter:

import tensorflow as tf

# Load and inspect the SavedModel
loaded_model = tf.saved_model.load('/tmp/my_saved_model')

# Check available signatures
print("Signatures:", list(loaded_model.signatures.keys()))

# Test inference with the loaded model. Signature functions are called
# with keyword arguments matching the signature's input names.
infer = loaded_model.signatures['serving_default']
test_input = tf.random.normal([1, 784])
output = infer(x=test_input)  # 'x' is the input name used at export time
print("Output keys:", list(output.keys()))
print("Output shape:", output[list(output.keys())[0]].shape)

# Inspect the SavedModel from the command line:
#   saved_model_cli show --dir /tmp/my_saved_model --all
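
A stronger check is to compare the reloaded model's output against the in-memory model on the same input. A sketch, assuming both the original model object and loaded_model from above are still available:

import numpy as np

original_out = model(test_input, training=False).numpy()
reloaded_out = list(infer(x=test_input).values())[0].numpy()

# Outputs should agree to within float tolerance.
assert np.allclose(original_out, reloaded_out, atol=1e-5)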

Common Issues

Signature not found during conversion
    Cause: no serving signature was defined. Resolution: use tf.saved_model.save() with an explicit signatures parameter.
Custom op not supported
    Cause: the model uses a TF operation not available in TF.js. Resolution: check the TF.js op support list; consider replacing unsupported ops before export.
Large model size
    Cause: unquantized float32 weights. Resolution: apply quantization during conversion (not export), or use pruning/distillation during training.
Variable dtype mismatch
    Cause: int64 variables in the model. Resolution: TF.js does not support int64; cast to int32 before export (see the sketch below).
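
For the int64 case, one approach is to pin int32 at the signature boundary so the exported graph never exposes int64 inputs or outputs. A sketch, assuming a hypothetical embedding-style model with integer inputs:

import tensorflow as tf

# Hypothetical model that consumes integer token ids.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(10)
])

@tf.function(input_signature=[tf.TensorSpec(shape=[None, 32], dtype=tf.int32)])
def serve_int32(ids):
    # Keep exported inputs/outputs int32; TF.js has no int64 tensor type.
    return {'logits': model(ids, training=False)}

tf.saved_model.save(model, '/tmp/int32_model',
                    signatures={'serving_default': serve_int32})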

See Also

Environments

Environment:Tensorflow_Tfjs_Python_Converter
