
Implementation:Tensorflow Tfjs Tensorflowjs Converter CLI

From Leeroopedia


Knowledge Sources
Domains Model_Persistence, Deployment
Principle Principle:Tensorflow_Tfjs_Model_Format_Conversion
Type External Tool Doc
Last Updated 2026-02-10 00:00 GMT

Environment:Tensorflow_Tfjs_Python_Converter

Overview

This implementation documents the tensorflowjs_converter command-line interface, the primary tool for converting Python TensorFlow models into TensorFlow.js format. The CLI reads models from various source formats and produces the model.json manifest plus .bin weight shard files that TensorFlow.js can load in the browser or Node.js.
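The CLI is distributed with the tensorflowjs Python package, so a typical setup looks like this (setup sketch; package and command names per the tfjs-converter project):

```shell
# Install the converter (pulls in TensorFlow as a dependency)
pip install tensorflowjs

# Confirm the CLI is on PATH and list all available flags
tensorflowjs_converter --help
```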

Source Reference

tfjs-converter/README.md:L87-171 — The CLI usage, parameter definitions, and conversion examples.

CLI Syntax

tensorflowjs_converter \
    --input_format=<INPUT_FORMAT> \
    --output_format=<OUTPUT_FORMAT> \
    [OPTIONS] \
    <INPUT_PATH> \
    <OUTPUT_PATH>

Parameters

Required Parameters

Parameter | Description | Values
<INPUT_PATH> | Path to the source model (directory, file, or URL) | File path or URL string
<OUTPUT_PATH> | Directory where the converted model will be written | Directory path (created if it does not exist)
--input_format | The format of the source model | tf_saved_model, keras, keras_saved_model, tf_hub, tfjs_layers_model, tf_frozen_model

Output Format

Parameter | Description | Values | Default
--output_format | The target format (TF.js or, for reverse conversion, Keras) | tfjs_graph_model, tfjs_layers_model, keras, keras_saved_model | Depends on input format

SavedModel-Specific Parameters

Parameter | Description | Default
--signature_name | The serving signature to export from the SavedModel | serving_default
--saved_model_tags | Comma-separated MetaGraphDef tags | serve

Frozen Graph Parameters

Parameter | Description | Default
--output_node_names | Comma-separated list of output node names | None (required for frozen graphs)

Quantization Parameters

Parameter | Weight Precision | Size Reduction | Accuracy Impact
--quantize_float16 | float32 to float16 | ~50% | Minimal
--quantize_uint8 | float32 to uint8 | ~75% | Low to moderate
--quantize_uint16 | float32 to uint16 | ~50% | Low
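The size reductions above follow directly from bytes-per-weight. The sketch below (illustrative parameter count; it ignores the small per-tensor min/scale metadata stored for affine quantization) estimates the weight payload at each precision:

```python
def weight_payload_bytes(num_params: int, bytes_per_weight: int) -> int:
    """Approximate weight payload size; ignores quantization metadata overhead."""
    return num_params * bytes_per_weight

params = 3_500_000  # illustrative, MobileNet-scale parameter count
f32 = weight_payload_bytes(params, 4)  # unquantized float32
f16 = weight_payload_bytes(params, 2)  # --quantize_float16
u8 = weight_payload_bytes(params, 1)   # --quantize_uint8

print(f"float32: {f32 / 1e6:.1f} MB")
print(f"float16: {f16 / 1e6:.1f} MB (~{100 * (1 - f16 / f32):.0f}% smaller)")
print(f"uint8:   {u8 / 1e6:.1f} MB (~{100 * (1 - u8 / f32):.0f}% smaller)")
```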

Other Parameters

Parameter | Description | Default
--weight_shard_size_bytes | Maximum byte size of each weight shard file | 4194304 (4 MB)
--control_flow_v2 | Enable TF Control Flow V2 ops | True
--strip_debug_ops | Remove Assert and debug operations from the graph | True
--skip_op_check | Skip the unsupported-op check (use with caution) | False
--metadata | Custom metadata JSON string to embed in model.json | None
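The shard size determines how many .bin files the converter emits: the total weight bytes divided by --weight_shard_size_bytes, rounded up. A quick sketch of that arithmetic:

```python
import math

def expected_shard_count(total_weight_bytes: int,
                         shard_size_bytes: int = 4 * 1024 * 1024) -> int:
    """Number of groupK-shardMofN.bin files for a given shard size."""
    return max(1, math.ceil(total_weight_bytes / shard_size_bytes))

# A 10 MB weight payload at the default 4 MB shard size -> 3 shards
print(expected_shard_count(10_000_000))             # 3
# The same payload with --weight_shard_size_bytes=1048576 (1 MB) -> 10 shards
print(expected_shard_count(10_000_000, 1_048_576))  # 10
```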

Inputs and Outputs

Inputs

  • TensorFlow SavedModel directory (containing saved_model.pb + variables/)
  • Keras HDF5 file (.h5 extension)
  • Keras SavedModel directory (Keras model saved in SavedModel format)
  • TF Hub module URL (https://tfhub.dev/...)
  • TF Frozen Graph (.pb file with frozen weights)
  • TF.js Layers Model directory (for reverse conversion or format change)

Outputs

The converter writes the following files to <OUTPUT_PATH>:

  • model.json: The model manifest containing:
    • modelTopology: The computation graph (graph model) or Keras layer config (layers model)
    • weightsManifest: Array of weight group descriptors with shard filenames and tensor metadata
    • format: Model format identifier
    • generatedBy: Converter version string
    • convertedBy: TF.js version string
  • group1-shard1ofN.bin, group1-shard2ofN.bin, ...: Binary weight shard files
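To make the manifest layout concrete, here is an illustrative model.json skeleton (field values are placeholders, not the output of a real conversion) round-tripped through the standard json module:

```python
import json

# Illustrative model.json skeleton; a real file carries a full graph/layer
# topology and actual converter version strings.
manifest = {
    "format": "graph-model",
    "generatedBy": "<converter version>",
    "convertedBy": "<TF.js version>",
    "modelTopology": {},  # GraphDef (graph model) or Keras config (layers model)
    "weightsManifest": [{
        "paths": ["group1-shard1of2.bin", "group1-shard2of2.bin"],
        "weights": [
            {"name": "dense/kernel", "shape": [128, 10], "dtype": "float32"},
            {"name": "dense/bias", "shape": [10], "dtype": "float32"},
        ],
    }],
}

parsed = json.loads(json.dumps(manifest))
print(parsed["weightsManifest"][0]["paths"])  # ['group1-shard1of2.bin', 'group1-shard2of2.bin']
```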

Conversion Examples

SavedModel to Graph Model

# Convert a TensorFlow SavedModel to TF.js graph model
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    --signature_name=serving_default \
    --saved_model_tags=serve \
    /path/to/saved_model \
    /path/to/tfjs_output

SavedModel to Graph Model with Quantization

# Convert with uint8 quantization for ~75% size reduction
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    --quantize_uint8 \
    /path/to/saved_model \
    /path/to/tfjs_output_quantized

Keras HDF5 to Layers Model

# Convert a Keras HDF5 model to TF.js layers model
tensorflowjs_converter \
    --input_format=keras \
    --output_format=tfjs_layers_model \
    /path/to/model.h5 \
    /path/to/tfjs_output

Keras SavedModel to Layers Model

# Convert a Keras model saved in SavedModel format
tensorflowjs_converter \
    --input_format=keras_saved_model \
    --output_format=tfjs_layers_model \
    /path/to/keras_saved_model \
    /path/to/tfjs_output

TF Hub Module to Graph Model

# Convert a TF Hub module directly from URL
tensorflowjs_converter \
    --input_format=tf_hub \
    --output_format=tfjs_graph_model \
    'https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/classification/5' \
    /path/to/tfjs_output

Frozen Graph to Graph Model

# Convert a frozen graph with explicit output nodes
tensorflowjs_converter \
    --input_format=tf_frozen_model \
    --output_format=tfjs_graph_model \
    --output_node_names='softmax_output' \
    /path/to/frozen_model.pb \
    /path/to/tfjs_output

Float16 Quantization with Custom Shard Size

# Convert with float16 quantization and 1MB shards
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    --quantize_float16 \
    --weight_shard_size_bytes=1048576 \
    /path/to/saved_model \
    /path/to/tfjs_output

Reverse Conversion: TF.js to Keras

# Convert a TF.js layers model back to Keras HDF5
tensorflowjs_converter \
    --input_format=tfjs_layers_model \
    --output_format=keras \
    /path/to/tfjs_model/model.json \
    /path/to/keras_output.h5

Output Verification

After conversion, verify the output files:

# List the converted output files
ls -la /path/to/tfjs_output/
# Expected: model.json, group1-shard1ofN.bin, ...

# Check model.json structure
python -c "
import json
with open('/path/to/tfjs_output/model.json') as f:
    model = json.load(f)
print('Format:', model.get('format'))
print('Generated by:', model.get('generatedBy'))
print('Weight groups:', len(model.get('weightsManifest', [])))
for group in model.get('weightsManifest', []):
    print('  Shards:', group.get('paths'))
    print('  Weights:', len(group.get('weights', [])))
"
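A further sanity check (a sketch, not part of the converter) is to compare the tensor sizes declared in weightsManifest against the bytes actually present on disk in the shard files. Note that for quantized models the stored dtype comes from each weight's quantization entry rather than its dtype field:

```python
import json
import math
import os

DTYPE_BYTES = {"float32": 4, "int32": 4, "float16": 2, "uint16": 2, "uint8": 1, "bool": 1}

def declared_weight_bytes(model_json_path: str) -> int:
    """Sum the byte sizes of all tensors declared in weightsManifest."""
    with open(model_json_path) as f:
        model = json.load(f)
    total = 0
    for group in model.get("weightsManifest", []):
        for w in group.get("weights", []):
            # Quantized tensors are stored in the quantization dtype.
            dtype = w.get("quantization", {}).get("dtype", w["dtype"])
            total += math.prod(w["shape"]) * DTYPE_BYTES[dtype]
    return total

def shard_bytes(model_dir: str) -> int:
    """Sum the on-disk sizes of all .bin shard files next to model.json."""
    return sum(
        os.path.getsize(os.path.join(model_dir, name))
        for name in os.listdir(model_dir)
        if name.endswith(".bin")
    )
```

For an unquantized model the two totals should match exactly; a mismatch suggests a truncated download or an incomplete conversion.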

Conversion Format Matrix

Source Format | --input_format | --output_format | Typical Use Case
TF SavedModel | tf_saved_model | tfjs_graph_model | Production inference-only deployment
TF SavedModel | tf_saved_model | tfjs_layers_model | When the Layers API is needed in JS
Keras SavedModel | keras_saved_model | tfjs_layers_model | Keras models with fine-tuning support
Keras HDF5 | keras | tfjs_layers_model | Legacy Keras model files
TF Hub | tf_hub | tfjs_graph_model | Pre-trained models from TF Hub
Frozen Graph | tf_frozen_model | tfjs_graph_model | Legacy TF 1.x frozen models
TF.js Layers | tfjs_layers_model | keras | Export back to Python Keras
TF.js Layers | tfjs_layers_model | keras_saved_model | Export back to a Python SavedModel

Common Issues

Issue | Cause | Resolution
Unsupported op: OpName | The model uses a TensorFlow operation not implemented in TF.js | Check the supported-ops list; rewrite the model to avoid the unsupported op, or pass --skip_op_check (inference will still fail at runtime on those ops)
No signature found | The SavedModel has no serving signature | Export the model with explicit signatures; see Implementation:Tensorflow_Tfjs_Tf_Saved_Model_Save
Memory error during conversion | A very large model exceeds system memory | Increase swap space or use a machine with more RAM
Output node not found | Incorrect --output_node_names for the frozen graph | Inspect the graph with saved_model_cli or TensorBoard to find the correct node names
Weight shard too large | The default 4 MB shard size may be too large or too small for the target | Adjust --weight_shard_size_bytes to suit your deployment
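For the signature and output-node issues above, saved_model_cli (shipped with TensorFlow) prints the signatures, tag-sets, and input/output tensor names of a SavedModel:

```shell
# Show all MetaGraphDefs, signatures, and tensor names in a SavedModel
saved_model_cli show --dir /path/to/saved_model --all
```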

See Also

Environments

  • Environment:Tensorflow_Tfjs_Python_Converter
