Principle: TensorFlow Serving SavedModel Export
| Knowledge Sources | |
|---|---|
| Domains | Model_Serialization, Deployment |
| Last Updated | 2026-02-13 17:00 GMT |
Overview
A serialization mechanism that persists a trained TensorFlow model's computation graph, weights, and serving signatures to a standardized directory format.
Description
SavedModel is TensorFlow's canonical serialization format for deploying trained models. It packages the complete computation graph (as a MetaGraphDef protobuf), trained variable values (checkpoint format), and SignatureDefs into a self-contained directory structure. This format decouples training from serving: a model trained in any environment can be loaded by TensorFlow Serving without access to the original training code.
The SavedModel directory layout:
- `saved_model.pb` — Serialized MetaGraphDef with SignatureDefs
- `variables/` — Checkpoint files with trained weights
- `assets/` (optional) — External files (vocabularies, etc.)
- `assets.extra/` (optional) — Warmup requests and other serving metadata
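In TensorFlow 2, a directory with this layout can be produced with `tf.saved_model.save`. A minimal sketch (the `Scaler` module and the export path are illustrative, not from the original):

```python
import os
import tempfile

import tensorflow as tf

# A trivial trainable module standing in for a real model.
class Scaler(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(2.0)

    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return self.w * x

# Versioned path convention: <base_path>/<version_number>/
export_path = os.path.join(tempfile.mkdtemp(), "1")
tf.saved_model.save(Scaler(), export_path)

# The export writes saved_model.pb plus a variables/ checkpoint directory.
print(sorted(os.listdir(export_path)))
```

Exact directory contents vary slightly by TensorFlow version (newer releases also emit a fingerprint file), but `saved_model.pb` and `variables/` are always present.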
Usage
Use SavedModel export after training completes and signatures are defined. This is the required format for TensorFlow Serving. The export path must follow the versioned directory convention (`<base_path>/<version_number>/`) for the filesystem source to discover and serve the model.
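The convention can be captured in a small helper (a hypothetical function, not part of any TensorFlow API; TF Serving only requires that the version directory name be a non-negative integer):

```python
import os

def versioned_export_path(base_path, version):
    # Build a serving path following the <base_path>/<version_number>/
    # convention expected by TF Serving's filesystem source.
    return os.path.join(base_path, str(int(version)))

print(versioned_export_path("/models/resnet", 3))  # → /models/resnet/3
```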
Theoretical Basis
The export process follows three steps:
# Abstract export algorithm (NOT real implementation)
builder = create_saved_model_builder(export_path)
builder.add_metagraph(
    session=trained_session,
    tags=["serve"],
    signatures={"default": classification_sig, "predict_images": prediction_sig},
    main_op=table_initializer
)
builder.serialize_to_disk()
Tags identify which MetaGraph to load at serving time. The standard tag for serving is `"serve"` (`tf.saved_model.SERVING`).
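The abstract steps above correspond roughly to the legacy `tf.compat.v1.saved_model.builder.SavedModelBuilder` API. A sketch under assumptions (the graph, tensor names, and the `predict_images` signature key are illustrative):

```python
import os
import tempfile

import tensorflow as tf

tf.compat.v1.disable_eager_execution()
export_path = os.path.join(tempfile.mkdtemp(), "1")  # versioned path

graph = tf.Graph()
with graph.as_default():
    # Stand-in graph: a single linear layer.
    x = tf.compat.v1.placeholder(tf.float32, [None, 784], name="images")
    w = tf.compat.v1.get_variable("w", [784, 10])
    scores = tf.matmul(x, w, name="scores")

    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())

        builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_path)
        signature = tf.compat.v1.saved_model.predict_signature_def(
            inputs={"images": x}, outputs={"scores": scores})
        builder.add_meta_graph_and_variables(
            sess,
            tags=[tf.saved_model.SERVING],  # the "serve" tag
            signature_def_map={"predict_images": signature},
            main_op=tf.compat.v1.tables_initializer())
        builder.save()  # writes saved_model.pb and variables/
```

`main_op` runs once at load time, which is how table initializers (e.g. vocabulary lookups) are wired into serving.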
Versioned paths enable automatic version management: TensorFlow Serving's filesystem source polls the base path and discovers subdirectories named with integer version numbers.
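The discovery rule can be illustrated with a simplified stand-in for the filesystem source's scan (a pure-Python sketch, not TF Serving's actual implementation):

```python
import os
import tempfile

def discover_versions(base_path):
    # Simplified version of the scan: keep only subdirectories whose
    # names are integers; TF Serving serves the highest by default.
    return sorted(
        int(name) for name in os.listdir(base_path)
        if name.isdigit() and os.path.isdir(os.path.join(base_path, name))
    )

base = tempfile.mkdtemp()
for name in ("1", "2", "10", "checkpoints"):  # "checkpoints" is ignored
    os.makedirs(os.path.join(base, name))

print(discover_versions(base))  # → [1, 2, 10]
```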