
Implementation:Tencent Ncnn Net Load Param And Model

From Leeroopedia


Knowledge Sources
Domains Inference, Model_Deployment
Last Updated 2026-02-09 00:00 GMT

Overview

A concrete API, provided by the ncnn library, for deserializing a neural network's topology and weights into an ncnn::Net runtime object.

Description

The ncnn::Net class is the central runtime container for a neural network in ncnn. The load_param method parses the network structure from a .param file (plain-text or binary format), instantiating each layer and resolving blob connectivity. The load_model method reads the weight data from a .bin file and populates each layer's internal weight storage. Together they produce a fully initialized Net object ready to create Extractor instances for inference.

Multiple overloads support loading from file paths, FILE pointers, in-memory buffers (with zero-copy weight referencing), and Android asset managers. The Option object on net.opt must be configured before calling load_param, as it controls runtime behaviors like Vulkan compute, int8 inference, and SIMD packing.
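As a sketch of what the plain-text parser consumes: a .param file begins with a magic number (7767517), then a line holding the layer count and blob count, then one line per layer. The helper below reads only that header; parse_param_header is an illustrative name, not part of the ncnn API.

```cpp
#include <sstream>
#include <string>

// Minimal sketch of the header that load_param consumes from a
// plain-text .param file: magic number, then layer count and blob
// count. The real parser goes on to read one line per layer.
bool parse_param_header(const std::string& text, int& layer_count, int& blob_count)
{
    std::istringstream in(text);
    int magic = 0;
    if (!(in >> magic) || magic != 7767517)
        return false;  // not a plain-text ncnn param file
    return static_cast<bool>(in >> layer_count >> blob_count);
}
```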

Usage

Use this API at the start of every ncnn inference pipeline. Call load_param first, then load_model. Both must succeed (return 0) before creating an Extractor. Typical use cases include loading models converted via PNNX, ncnnoptimize, or ncnn2int8.
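The ordering and return-code contract above can be sketched independently of ncnn. Here load_network, load_param_fn, and load_model_fn are hypothetical names standing in for the real calls on a Net object:

```cpp
#include <functional>

// Illustrative guard encoding the contract: topology must load first,
// and both calls must return 0 before the network is usable.
bool load_network(const std::function<int()>& load_param_fn,
                  const std::function<int()>& load_model_fn)
{
    if (load_param_fn() != 0)  // structure must parse first
        return false;
    if (load_model_fn() != 0)  // weights attach to the layers created above
        return false;
    return true;               // safe to create an Extractor now
}
```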

Code Reference

Source Location

  • Repository: ncnn
  • File: src/net.h (declaration), src/net.cpp (implementation)
  • Lines: net.h:L27-164 (Net class), net.cpp:L1747-1758 (load_param by path), net.cpp:L1820-1855 (load_model by path), net.cpp:L996-1288 (load_param core parser)

Signature

class Net
{
public:
    Net();
    virtual ~Net();

    // option must be configured before loading
    Option opt;

    // Load network structure from plain param file
    // return 0 if success
    int load_param(const char* protopath);

    // Load network structure from binary param file
    int load_param_bin(const char* protopath);

    // Load network weight data from model file
    // return 0 if success
    int load_model(const char* modelpath);

    // Load from FILE*
    int load_param(FILE* fp);
    int load_model(FILE* fp);

    // Load from in-memory buffer (zero-copy for weights)
    // memory pointer must be 32-bit aligned
    // return bytes consumed
    int load_param(const unsigned char* mem);
    int load_model(const unsigned char* mem);

    // Load from DataReader abstraction
    int load_param(const DataReader& dr);
    int load_model(const DataReader& dr);

    // Android asset manager overloads
    int load_param(AAssetManager* mgr, const char* assetpath);
    int load_model(AAssetManager* mgr, const char* assetpath);

    // Construct an Extractor from loaded network
    Extractor create_extractor() const;

    // Custom layer registration
    int register_custom_layer(const char* type, layer_creator_func creator,
                              layer_destroyer_func destroyer = 0, void* userdata = 0);
};

Import

#include "net.h"
// ncnn::Net is in the ncnn namespace

I/O Contract

Inputs

Name       Type          Required  Description
protopath  const char*   Yes       File path to the .param file (network topology)
modelpath  const char*   Yes       File path to the .bin file (network weights)
opt        ncnn::Option  No        Inference options (Vulkan, int8, threading); set before loading

Outputs

Name                Type             Description
return value        int              0 on success, non-zero on failure
Net object          ncnn::Net        Fully initialized network with layers and weights loaded
create_extractor()  ncnn::Extractor  Inference session created from the loaded Net

Usage Examples

Basic File Loading

#include "net.h"

ncnn::Net net;

// Optional: configure options before loading
net.opt.use_vulkan_compute = true;
net.opt.num_threads = 4;

// Load topology then weights
if (net.load_param("model.ncnn.param"))
    return -1;  // failed
if (net.load_model("model.ncnn.bin"))
    return -1;  // failed

// Network is ready — create an extractor for inference
ncnn::Extractor ex = net.create_extractor();

In-Memory Loading (Embedded Models)

#include "net.h"

// Models compiled into the binary via ncnn2mem
extern const unsigned char model_param[];
extern const unsigned char model_bin[];

ncnn::Net net;
net.load_param(model_param);  // in-memory overloads return bytes consumed, 0 on failure
net.load_model(model_bin);
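The in-memory overloads require the buffer to be 32-bit aligned. The arrays emitted by ncnn2mem satisfy this already; if you embed bytes yourself, alignas(4) guarantees it. The byte values and names below are placeholders, not a real model:

```cpp
#include <cstdint>

// alignas(4) ensures the 32-bit alignment that the in-memory
// load_param/load_model overloads require. Placeholder bytes only.
alignas(4) static const unsigned char embedded_param[] = {
    0x00, 0x00, 0x00, 0x00
};

// Check the alignment precondition before handing a buffer to ncnn.
bool is_32bit_aligned(const unsigned char* p)
{
    return reinterpret_cast<std::uintptr_t>(p) % 4 == 0;
}
```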

Android Asset Loading

#include "net.h"

// mgr is an AAssetManager*, e.g. obtained via AAssetManager_fromJava
ncnn::Net net;
net.load_param(mgr, "model.ncnn.param");
net.load_model(mgr, "model.ncnn.bin");
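Custom Layer Registration (Illustrative Mock)

register_custom_layer must be called before load_param so the parser can instantiate layer types it does not know. The self-contained mock below illustrates the mechanism only: a table from type name to creator function. All names here (MySwish, create_custom_layer) are hypothetical; in real ncnn code you derive from ncnn::Layer and pass the creator generated by the DEFINE_LAYER_CREATOR macro.

```cpp
#include <map>
#include <string>

// Stand-ins for ncnn::Layer and a user-defined layer class.
struct Layer { virtual ~Layer() {} };
struct MySwish : Layer {};

using layer_creator_func = Layer* (*)();
static Layer* MySwish_creator() { return new MySwish; }

// Mock registry: maps a layer type name to its creator, the table
// load_param consults when it meets an unknown type.
static std::map<std::string, layer_creator_func> g_custom_layers;

int register_custom_layer(const std::string& type, layer_creator_func creator)
{
    g_custom_layers[type] = creator;  // must happen before load_param
    return 0;
}

Layer* create_custom_layer(const std::string& type)
{
    auto it = g_custom_layers.find(type);
    return it == g_custom_layers.end() ? nullptr : it->second();
}
```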
