Implementation: Tencent ncnn Extractor Input and Extract
| Knowledge Sources | |
|---|---|
| Domains | Inference, Deep_Learning |
| Last Updated | 2026-02-09 00:00 GMT |
Overview
Concrete tool, provided by the ncnn library, for executing neural network forward passes and extracting output tensors.
Description
The ncnn::Extractor class provides the session-based inference API in ncnn. Created via Net::create_extractor(), it manages per-inference state including input/output blob storage and memory allocators. The input method binds a preprocessed ncnn::Mat tensor to a named input blob. The extract method triggers forward computation of all layers needed to produce the requested output blob and returns the result.
Each Extractor is independent — multiple Extractors from the same Net can run concurrently on different inputs. Light mode (enabled by default) recycles intermediate blob memory for reduced peak usage. The Extractor also supports Vulkan GPU tensors (VkMat) for zero-copy GPU inference pipelines.
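Because Extractors are independent and create_extractor() is const, a common pattern is to share one loaded Net across worker threads, each creating its own Extractor. A minimal sketch of that pattern (the blob names "data" and "prob" are placeholders for whatever the model's .param file defines):

```cpp
#include "net.h"
#include <thread>
#include <vector>

// Sketch: one Net shared read-only, one Extractor per worker thread.
// Blob names "data"/"prob" are placeholders from a hypothetical .param file.
void run_batch(const ncnn::Net& net, const std::vector<ncnn::Mat>& inputs,
               std::vector<ncnn::Mat>& outputs)
{
    outputs.resize(inputs.size());
    std::vector<std::thread> workers;
    for (size_t i = 0; i < inputs.size(); i++)
    {
        workers.emplace_back([&net, &inputs, &outputs, i]() {
            // Each thread gets its own per-inference state.
            ncnn::Extractor ex = net.create_extractor();
            ex.input("data", inputs[i]);
            ex.extract("prob", outputs[i]);
        });
    }
    for (std::thread& t : workers)
        t.join();
}
```

The Net's weights are only read during inference, so no locking is needed around the shared Net itself.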
Usage
Use after loading a model with Net::load_param and Net::load_model. Create one Extractor per inference call. Blob names (e.g., "data", "prob") correspond to the names in the .param file. To extract an intermediate blob that later layers consume, first disable light mode with set_light_mode(false), since light mode recycles intermediate blob memory during the forward pass.
Code Reference
Source Location
- Repository: ncnn
- File: src/net.h (declaration), src/net.cpp (implementation)
- Lines: net.h:L131 (create_extractor), net.h:L167-248 (Extractor class), net.cpp:L2067 (create_extractor impl), net.cpp:L2368-2405 (input/extract by name)
Signature
// Created from Net — not directly constructed
class Extractor
{
public:
// Enable light mode (recycle intermediate blobs)
// Enabled by default
void set_light_mode(bool enable);
// Set memory allocators
void set_blob_allocator(Allocator* allocator);
void set_workspace_allocator(Allocator* allocator);
// Set input by blob name, return 0 if success
int input(const char* blob_name, const Mat& in);
// Get result by blob name, return 0 if success
// type=0: default (convert fp16/packing)
// type=1: raw output (no conversion)
int extract(const char* blob_name, Mat& feat, int type = 0);
// Index-based variants
int input(int blob_index, const Mat& in);
int extract(int blob_index, Mat& feat, int type = 0);
// Vulkan GPU variants (when NCNN_VULKAN enabled)
int input(const char* blob_name, const VkMat& in);
int extract(const char* blob_name, VkMat& feat, VkCompute& cmd);
};
// Factory method on Net
Extractor Net::create_extractor() const;
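When string support is compiled out (NCNN_STRING disabled), only the index-based overloads are available. A hedged sketch using Net::input_indexes() and Net::output_indexes() (accessors present in recent ncnn versions) for a single-input, single-output model:

```cpp
#include "net.h"
#include <vector>

// Sketch: index-based I/O for builds without NCNN_STRING.
// Assumes a single-input, single-output model; input_indexes() and
// output_indexes() list the blob indices declared in the .param file.
int infer_by_index(const ncnn::Net& net, const ncnn::Mat& in, ncnn::Mat& out)
{
    const std::vector<int>& in_idx = net.input_indexes();
    const std::vector<int>& out_idx = net.output_indexes();
    if (in_idx.empty() || out_idx.empty())
        return -1;

    ncnn::Extractor ex = net.create_extractor();
    if (ex.input(in_idx[0], in) != 0)
        return -1;
    return ex.extract(out_idx[0], out);
}
```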
Import
#include "net.h"
// ncnn::Extractor is in the ncnn namespace
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| blob_name | const char* | Yes | Named blob from the .param file (e.g., "data", "images") |
| in | const ncnn::Mat& | Yes | Preprocessed input tensor |
Outputs
| Name | Type | Description |
|---|---|---|
| return value | int | 0 on success, non-zero on failure |
| feat | ncnn::Mat& | Output tensor filled with inference results |
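Both calls report failure through the int return value (for example, when the blob name does not exist in the .param file), so a defensive call site checks each step. A minimal sketch, again using placeholder blob names:

```cpp
#include "net.h"
#include <cstdio>

// Sketch: propagate ncnn's 0/non-zero return codes instead of ignoring them.
// Blob names "data"/"prob" are placeholders for this model's .param names.
int classify(const ncnn::Net& net, const ncnn::Mat& in, ncnn::Mat& out)
{
    ncnn::Extractor ex = net.create_extractor();
    if (ex.input("data", in) != 0)
    {
        fprintf(stderr, "input blob not found\n");
        return -1;
    }
    if (ex.extract("prob", out) != 0)
    {
        fprintf(stderr, "extract failed\n");
        return -1;
    }
    return 0;
}
```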
Usage Examples
Basic Classification Inference
#include "net.h"
ncnn::Net net;
net.load_param("squeezenet_v1.1.param");
net.load_model("squeezenet_v1.1.bin");
// Preprocessed input (see Mat_From_Pixels_Resize)
ncnn::Mat in = /* ... preprocessed 227x227 tensor ... */;
ncnn::Extractor ex = net.create_extractor();
ex.input("data", in);
ncnn::Mat out;
ex.extract("prob", out);
// out.w contains number of classes
// out[i] contains probability for class i
for (int i = 0; i < out.w; i++)
{
float score = out[i];
}
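The loop above reads raw scores; picking the predicted class is a plain argmax over out.w floats. The helper below operates on a flat float buffer, so it applies to a 1-D ncnn::Mat's data but has no ncnn dependency:

```cpp
#include <cstddef>

// Argmax over a flat score buffer, as produced by a classification head.
// Returns the index of the highest score, or -1 for an empty buffer.
int top1_class(const float* scores, int count)
{
    if (count <= 0)
        return -1;
    int best = 0;
    for (int i = 1; i < count; i++)
    {
        if (scores[i] > scores[best])
            best = i;
    }
    return best;
}
```

With the example above, the call would be top1_class((const float*)out.data, out.w).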
Multi-Output Extraction
ncnn::Extractor ex = net.create_extractor();
ex.input("images", in);
// Extract from multiple output blobs
ncnn::Mat stride8_out, stride16_out, stride32_out;
ex.extract("output0", stride8_out);
ex.extract("output1", stride16_out);
ex.extract("output2", stride32_out);