Implementation:Alibaba MNN Convert MNN Script

From Leeroopedia


Field Value
implementation_name Convert_MNN_Script
schema_version 0.3.0
impl_type Wrapper Doc
domain Stable Diffusion Deployment
stage Model Conversion
source_file transformers/diffusion/export/convert_mnn.py (L1-20)
external_deps MNNConvert binary (from MNN build) or pymnn Python package
last_updated 2026-02-10 14:00 GMT

Summary

This implementation is a Python wrapper script that iterates over the Stable Diffusion ONNX pipeline components and invokes MNNConvert on each one. It converts three sub-models (text_encoder, unet, vae_decoder) from ONNX to MNN format, applying user-specified optimization flags such as weight quantization and transformer fusion.

API

python convert_mnn.py <onnx_path> <mnn_output_path> [extra_flags...]

Key Parameters

Parameter Position Description
onnx_path arg1 (sys.argv[1]) Path to the ONNX model directory (output of the ONNX export step)
mnn_output_path arg2 (sys.argv[2]) Path to the output directory for MNN models
extra_flags arg3+ (sys.argv[3:]) Additional MNNConvert flags, joined with spaces (e.g., --weightQuantBits=8 --transformerFuse)
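
For example, with the quantization and fusion flags shown above, the trailing arguments are joined into a single string that is later appended to every MNNConvert command (a minimal sketch of the argument handling; the paths are placeholders):

import sys

# Hypothetical invocation:
#   python convert_mnn.py ./onnx_sd15 ./mnn_sd15 --weightQuantBits=8 --transformerFuse
onnx_path = sys.argv[1]                # "./onnx_sd15"
mnn_output_path = sys.argv[2]          # "./mnn_sd15"
extra_flags = " ".join(sys.argv[3:])   # "--weightQuantBits=8 --transformerFuse"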

Inputs

An ONNX model directory from the ONNX export step, containing:

  • text_encoder/model.onnx
  • unet/model.onnx (with weights.pb)
  • vae_decoder/model.onnx
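
A quick pre-flight check of this layout can catch a missing or incomplete ONNX export before any conversion is attempted (an illustrative sketch; check_onnx_layout is not part of the script):

import os

def check_onnx_layout(onnx_path):
    # Verify that every expected sub-model is present before invoking MNNConvert.
    for model in ['text_encoder', 'unet', 'vae_decoder']:
        model_file = os.path.join(onnx_path, model, 'model.onnx')
        if not os.path.exists(model_file):
            raise FileNotFoundError(model_file + ' not found; re-run the ONNX export step')

# check_onnx_layout('./onnx_sd15')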

Outputs

An MNN model directory containing:

<mnn_output_path>/
  text_encoder.mnn       # Converted CLIP text encoder
  unet.mnn               # Converted UNet denoising network
  vae_decoder.mnn        # Converted VAE decoder

Because the script always passes --saveExternalData=1, each .mnn file may also have an associated external weight-data file alongside the graph structure.

Core Function Signature

def convert(onnx_path, mnn_path, extra):

This function iterates over the model list ['text_encoder', 'unet', 'vae_decoder'] and for each model constructs and executes an MNNConvert command of the form:

MNNConvert -f ONNX --modelFile <onnx_path>/<model>/model.onnx \
    --MNNModel <mnn_path>/<model>.mnn --saveExternalData=1 <extra>

Source Code

import os
def convert(onnx_path, mnn_path, extra):
    print('Onnx path: ', onnx_path)
    print('MNN path: ', mnn_path)
    print('Extra: ', extra)
    convert_path = '../../../build/MNNConvert'
    if not os.path.exists(convert_path):
        print(convert_path + " not exist, use pymnn instead")
        convert_path = 'mnnconvert'
    models = ['text_encoder', 'unet', 'vae_decoder']
    for model in models:
        cmd = convert_path + ' -f ONNX --modelFile ' + os.path.join(onnx_path, model, 'model.onnx') + \
              ' --MNNModel ' + os.path.join(mnn_path, model + '.mnn') + ' --saveExternalData=1 ' + extra
        print(cmd)
        print(os.popen(cmd).read())

if __name__ == '__main__':
    import sys
    extra = ""
    extra = " ".join(sys.argv[3:])
    convert(sys.argv[1], sys.argv[2], extra)

Usage Example

# Basic conversion without extra optimizations
python convert_mnn.py ./onnx_sd15 ./mnn_sd15

# Conversion with 8-bit weight quantization and transformer fusion
python convert_mnn.py ./onnx_sd15 ./mnn_sd15 --weightQuantBits=8 --transformerFuse
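
The conversion can also be driven programmatically by importing the script's convert function, which takes the same three values as the CLI (a sketch; it assumes the script's directory is on the Python path, and the paths below are placeholders):

from convert_mnn import convert

# Equivalent to the second CLI example above: 8-bit weight quantization plus transformer fusion
convert('./onnx_sd15', './mnn_sd15', '--weightQuantBits=8 --transformerFuse')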

Notes

  • The script first looks for the MNNConvert binary at ../../../build/MNNConvert. The path is resolved relative to the current working directory, so it points at the MNN build tree when the script is run from its own directory. If the binary is not found, the script falls back to the mnnconvert command installed by the pymnn pip package.
  • The vae_encoder is not included in the default conversion list. If image-to-image functionality is needed, it must be converted separately (see the sketch after these notes).
  • The --saveExternalData=1 flag is always applied, separating weight data from the graph structure for memory-mapped loading.
  • Commands are executed via os.popen(), and their output is printed to stdout for debugging.
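
For a separate vae_encoder conversion, the same command pattern can be reused (a hypothetical one-off sketch; it assumes the ONNX export also produced vae_encoder/model.onnx and that the mnnconvert fallback is available):

import os

onnx_path, mnn_path = './onnx_sd15', './mnn_sd15'   # placeholder paths
cmd = ('mnnconvert -f ONNX --modelFile ' + os.path.join(onnx_path, 'vae_encoder', 'model.onnx') +
       ' --MNNModel ' + os.path.join(mnn_path, 'vae_encoder.mnn') + ' --saveExternalData=1')
print(os.popen(cmd).read())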
