Implementation:Pytorch Serve Generate Model Archive
Overview
generate_model_archive is the top-level function in the TorchServe model archiver that creates model archive (.mar) files. It generates a manifest from the provided configuration, validates inputs, copies all artifacts to a temporary directory, and produces the final archive. The companion ModelArchiverConfig dataclass provides a type-safe, programmatic way to specify all archive parameters.
| Field | Value |
|---|---|
| Implementation Name | Generate Model Archive |
| Type | API Doc |
| Workflow | Model_Deployment |
| Domains | Model_Packaging, DevOps |
| Knowledge Sources | TorchServe |
| Last Updated | 2026-02-13 00:00 GMT |
Description
The model archiving system consists of two main components:
- generate_model_archive(config): The public entry point that orchestrates the entire archiving process. It generates the manifest JSON and delegates to package_model().
- package_model(config, manifest): The internal function that performs the actual packaging: validating inputs, checking for existing archives, copying artifacts to a temp directory, and creating the final archive.
Archiving Pipeline
The archiving process follows these steps:
- Parse configuration: If no config is provided, parse CLI arguments via ArgParser.export_model_args_parser().
- Generate manifest: Create the MANIFEST.json content from the config.
- Validate inputs: Check that the model name is valid and the export path exists.
- Check for existing archive: Verify no .mar file already exists (unless --force is set).
- Copy artifacts: Copy model file, serialized file, handler, extra files, requirements, and config to a temp directory.
- Create archive: Zip the temp directory into the final .mar, .tar.gz, or flat directory.
- Cleanup: Remove the temporary directory.
Usage
```python
from model_archiver import ModelArchiverConfig
from model_archiver.model_packaging import generate_model_archive
```
Code Reference
Source Location
| File | Lines | Repository |
|---|---|---|
| model-archiver/model_archiver/model_packaging.py | L15-73 | pytorch/serve |
| model-archiver/model_archiver/model_archiver_config.py | L1-29 | pytorch/serve |
Signature
```python
def generate_model_archive(config: Optional[ModelArchiverConfig] = None) -> None:
    """
    Generate a model archive file.

    If config is None, parses command-line arguments to build the config.
    Generates a MANIFEST.json and packages all artifacts into an archive.

    Args:
        config (Optional[ModelArchiverConfig]): Archive configuration.
            If None, CLI arguments are parsed automatically.

    Returns:
        None

    Raises:
        ModelArchiverError: If validation fails or archiving encounters an error.
    """
    ...


def package_model(config: ModelArchiverConfig, manifest: str) -> None:
    """
    Internal helper that performs the actual packaging.

    Args:
        config (ModelArchiverConfig): Archive configuration dataclass.
        manifest (str): JSON string of the generated manifest.

    Returns:
        None

    Raises:
        ModelArchiverError: If validation, file copy, or archive creation fails.
    """
    ...
```
ModelArchiverConfig Dataclass
```python
import os
from dataclasses import dataclass
from typing import Literal, Optional


@dataclass
class ModelArchiverConfig:
    model_name: str                          # Required: Name of the model
    handler: str                             # Required: Handler file or built-in name
    version: str                             # Required: Model version string
    serialized_file: Optional[str] = None    # Path to serialized model weights
    model_file: Optional[str] = None         # Path to model class definition (.py)
    extra_files: Optional[str] = None        # Comma-separated list of extra files
    runtime: str = "python"                  # Runtime type (default: "python")
    export_path: str = os.getcwd()           # Output directory for the archive
    archive_format: Literal["default", "tgz", "no-archive"] = "default"  # Archive format
    force: bool = False                      # Overwrite existing archive
    requirements_file: Optional[str] = None  # Path to requirements.txt
    config_file: Optional[str] = None        # Path to model YAML config
```
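Only the first three fields are required; everything else falls back to a default. The snippet below demonstrates this with a minimal standalone mirror of the dataclass (so it runs without TorchServe installed); the mirror's fields copy the definitions above.

```python
import os
from dataclasses import asdict, dataclass
from typing import Optional


# Standalone mirror of ModelArchiverConfig's fields, for illustration only.
@dataclass
class Config:
    model_name: str
    handler: str
    version: str
    serialized_file: Optional[str] = None
    runtime: str = "python"
    export_path: str = os.getcwd()
    archive_format: str = "default"
    force: bool = False


# Only the three required fields need to be passed; the rest use defaults.
cfg = Config(model_name="squeezenet1_1", handler="image_classifier", version="1.0")
print(cfg.runtime)                      # python
print(asdict(cfg)["archive_format"])    # default
```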
Import
```python
from model_archiver.model_packaging import generate_model_archive
from model_archiver import ModelArchiverConfig
```
I/O Contract
generate_model_archive
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| config | Optional[ModelArchiverConfig] | No | None | If None, CLI args are parsed |

| Return | Type | Description |
|---|---|---|
| None | None | Archives are written to disk at config.export_path |
ModelArchiverConfig Fields
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| model_name | str | Yes | -- | Name used for the archive filename and registration |
| handler | str | Yes | -- | Handler file path or built-in handler name |
| version | str | Yes | -- | Semantic version string (e.g., "1.0") |
| serialized_file | Optional[str] | No | None | Path to .pt, .pth, .onnx, or .so weights |
| model_file | Optional[str] | No | None | Python file with model class for eager mode |
| extra_files | Optional[str] | No | None | Comma-separated paths to additional files |
| runtime | str | No | "python" | Runtime environment type |
| export_path | str | No | os.getcwd() | Directory to write the archive |
| archive_format | Literal | No | "default" | One of "default", "tgz", "no-archive" |
| force | bool | No | False | Overwrite existing archive if True |
| requirements_file | Optional[str] | No | None | Path to requirements.txt |
| config_file | Optional[str] | No | None | Path to model YAML configuration file |
Output Artifacts
| Format | Output File | Contents |
|---|---|---|
| default | {export_path}/{model_name}.mar | ZIP archive |
| tgz | {export_path}/{model_name}.tar.gz | Gzipped tar archive |
| no-archive | {export_path}/{model_name}/ | Flat directory with all artifacts |
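The format-to-path mapping in the table can be captured in a small helper. This is a hypothetical convenience function written for this page, not part of TorchServe's API:

```python
import os


def expected_output_path(export_path: str, model_name: str, archive_format: str) -> str:
    """Hypothetical helper: compute the output path for a given archive format."""
    suffix = {"default": ".mar", "tgz": ".tar.gz", "no-archive": os.sep}
    if archive_format not in suffix:
        raise ValueError(f"Unknown archive format: {archive_format}")
    return os.path.join(export_path, model_name) + suffix[archive_format]


print(expected_output_path("model_store", "bert_classifier", "default"))
```

Such a helper is handy in deployment scripts that need to check whether an archive already exists before deciding to pass force=True.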
Usage Examples
Example 1: Basic TorchScript model archiving
```python
from model_archiver import ModelArchiverConfig
from model_archiver.model_packaging import generate_model_archive

config = ModelArchiverConfig(
    model_name="squeezenet1_1",
    handler="image_classifier",
    version="1.0",
    serialized_file="squeezenet1_1.pt",
    export_path="model_store/",
)
generate_model_archive(config)
# Creates: model_store/squeezenet1_1.mar
```
Example 2: Full archive with config and dependencies
```python
from model_archiver import ModelArchiverConfig
from model_archiver.model_packaging import generate_model_archive

config = ModelArchiverConfig(
    model_name="bert_classifier",
    handler="handler.py",
    version="2.0",
    model_file="model.py",
    serialized_file="bert_weights.pt",
    extra_files="tokenizer_config.json,vocab.txt,index_to_name.json",
    config_file="model_config.yaml",
    requirements_file="requirements.txt",
    export_path="model_store/",
    archive_format="default",
    force=True,
)
generate_model_archive(config)
# Creates: model_store/bert_classifier.mar (overwrites if exists)
```
Example 3: CLI equivalent
```shell
torch-model-archiver \
    --model-name bert_classifier \
    --version 2.0 \
    --model-file model.py \
    --serialized-file bert_weights.pt \
    --handler handler.py \
    --extra-files "tokenizer_config.json,vocab.txt,index_to_name.json" \
    --config-file model_config.yaml \
    --requirements-file requirements.txt \
    --export-path model_store/ \
    --archive-format default \
    --force
```
Example 4: Creating a no-archive for development
```python
from model_archiver import ModelArchiverConfig
from model_archiver.model_packaging import generate_model_archive

config = ModelArchiverConfig(
    model_name="debug_model",
    handler="handler.py",
    version="0.1",
    serialized_file="model.pt",
    export_path="/tmp/",
    archive_format="no-archive",
)
generate_model_archive(config)
# Creates: /tmp/debug_model/ (flat directory, no archive)
```
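Because the default format is a plain ZIP archive (see Output Artifacts above), a generated .mar can be inspected with Python's standard zipfile module. The snippet builds a toy .mar to demonstrate; the member names here are illustrative, not the exact layout TorchServe produces.

```python
import os
import tempfile
import zipfile

# Build a toy .mar (a plain ZIP) to demonstrate inspection.
tmpdir = tempfile.mkdtemp()
mar_path = os.path.join(tmpdir, "demo.mar")
with zipfile.ZipFile(mar_path, "w") as zf:
    zf.writestr("MANIFEST.json", '{"model": {"modelName": "demo"}}')  # illustrative name
    zf.writestr("model.pt", b"weights")

# List the archive's members, just as you would for a real .mar.
with zipfile.ZipFile(mar_path) as zf:
    print(sorted(zf.namelist()))  # ['MANIFEST.json', 'model.pt']
```

This is useful for verifying that extra_files and the requirements file actually made it into the archive before registering it with a running server.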
Related Pages
- Principle:Pytorch_Serve_Model_Archiving - The model archiving principle this function implements
- Implementation:Pytorch_Serve_Get_Yaml_Config - Reads the config file that gets bundled into archives
- Implementation:Pytorch_Serve_Management_API - Registers archived models on a running server
- Implementation:Pytorch_Serve_BaseHandler - Handlers that are packaged inside archives