Implementation: Tencent Ncnn YOLOv4 Example
| Knowledge Sources | |
|---|---|
| Domains | Vision, Object Detection |
| Last Updated | 2026-02-09 19:00 GMT |
Overview
Concrete tool for object detection on COCO classes using YOLOv4 or YOLOv4-tiny with ncnn, supporting both image and video input with optional profiling.
Description
This example demonstrates YOLOv4 (or YOLOv4-tiny) object detection with ncnn, and is the only example in the repository that supports video input via OpenCV VideoCapture. The model variant is selected via a compile-time #define YOLOV4_TINY: YOLOv4-tiny uses 416x416 input, while full YOLOv4 uses 608x608. The model detects 80 COCO classes (person, car, dog, etc.). A separate init_yolov4 function handles model initialization, allowing the model to be loaded once and reused across video frames. The example also includes NCNN_PROFILING support via ncnn::get_current_time() for benchmarking model load time, detection time, capture time, and draw time. For video input, the detection loop runs continuously reading from a V4L2 device path.
Usage
Use this example for real-time object detection from a webcam or video device, or for single-image detection on COCO categories. It is particularly suitable for performance benchmarking and streaming inference scenarios. The YOLOv4-tiny variant is appropriate for real-time applications, while full YOLOv4 provides higher accuracy.
Code Reference
Source Location
- Repository: Tencent_Ncnn
- File: examples/yolov4.cpp
- Lines: 1-284
Signature
static int init_yolov4(ncnn::Net* yolov4, int* target_size);
static int detect_yolov4(const cv::Mat& bgr, std::vector<Object>& objects,
int target_size, ncnn::Net* yolov4);
static int draw_objects(const cv::Mat& bgr, const std::vector<Object>& objects,
int is_streaming);
int main(int argc, char** argv);
Import
#include "net.h"
#include "benchmark.h" // for NCNN_PROFILING
I/O Contract
Inputs
| Name | Type | Required | Description |
|---|---|---|---|
| device_or_image | const char* | Yes | Either a V4L2 device path (e.g., /dev/video0) for streaming or an image file path |
| bgr | cv::Mat | Yes | BGR frame from image file or video capture |
Outputs
| Name | Type | Description |
|---|---|---|
| objects | std::vector<Object> | Detected objects with label, prob, and rect for 80 COCO classes |
| Visual output | cv::imshow window | Image/video with bounding boxes and class labels (waitKey(1) for streaming, waitKey(0) for image) |
| Profiling output | stdout | Timing for model init, capture, detection, and draw (when NCNN_PROFILING is defined) |
Model Files
| File | Description |
|---|---|
| yolov4-tiny-opt.param | YOLOv4-tiny optimized parameter file (when YOLOV4_TINY defined) |
| yolov4-tiny-opt.bin | YOLOv4-tiny optimized weight file (when YOLOV4_TINY defined) |
| yolov4-opt.param | Full YOLOv4 optimized parameter file (when YOLOV4_TINY not defined) |
| yolov4-opt.bin | Full YOLOv4 optimized weight file (when YOLOV4_TINY not defined) |
Preprocessing
- Color conversion: BGR to RGB via ncnn::Mat::PIXEL_BGR2RGB
- Resize: 416x416 for YOLOv4-tiny, 608x608 for full YOLOv4
- Mean values: [0, 0, 0]
- Norm values: [1/255.0, 1/255.0, 1/255.0]
- Effect: Normalizes pixel values to [0, 1] range
Architecture
The example separates initialization from detection to support the video streaming loop:
main()
|-- init_yolov4() // Load model once
|-- [Image mode]
| |-- detect_yolov4() // Single inference
| |-- draw_objects() // Display and exit
|-- [Video mode]
|-- loop:
|-- cap >> frame
|-- detect_yolov4()
|-- draw_objects(is_streaming=1)
Usage Examples
Running with an Image
./yolov4 image.jpg
Running with a Webcam
./yolov4 /dev/video0
Key Code Pattern
ncnn::Net yolov4;
int target_size = 0;
init_yolov4(&yolov4, &target_size); // load model once

// For video streaming
cv::Mat frame;
std::vector<Object> objects;
cv::VideoCapture cap;
cap.open(devicepath); // e.g. "/dev/video0"
while (1) {
    cap >> frame;
    if (frame.empty())
        break; // stop when the capture yields no frame
    detect_yolov4(frame, objects, target_size, &yolov4);
    draw_objects(frame, objects, /*is_streaming=*/1);
}
Compile-Time Configuration
| Define | Effect |
|---|---|
| YOLOV4_TINY | Selects YOLOv4-tiny model (416x416) instead of full YOLOv4 (608x608) |
| NCNN_PROFILING | Enables timing measurements for init, capture, detection, and draw phases |