Principle: Tencent ncnn Vulkan GPU Detection
| Knowledge Sources | |
|---|---|
| Domains | GPU_Computing, Device_Management |
| Last Updated | 2026-02-09 00:00 GMT |
Overview
The process of initializing the Vulkan runtime, enumerating the available GPU devices, and selecting the most suitable device for compute inference.
Description
Vulkan GPU detection is the initialization phase for GPU-accelerated inference. It creates a Vulkan instance (the root API object), enumerates all physical GPU devices on the system, queries their capabilities (memory size, compute features, fp16/int8 support, cooperative matrix support), and selects the appropriate device for inference.
On multi-GPU systems, the selection can be automatic (ncnn picks the default) or explicit (user specifies a device index). The GpuInfo structure exposes per-device capabilities that influence inference strategy: fp16 arithmetic support, memory properties, maximum workgroup sizes, and whether hardware tensor cores (cooperative matrix) are available.
Usage
Use at application startup before loading any models. The Vulkan instance must be created once and destroyed at shutdown. GPU detection results inform which optimizations are available (fp16, cooperative matrix, etc.).
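A typical lifecycle looks like the sketch below, assuming ncnn was built with NCNN_VULKAN=ON; the functions shown (create_gpu_instance, get_gpu_count, destroy_gpu_instance) are from ncnn's gpu.h.

```cpp
#include "gpu.h"   // ncnn Vulkan runtime API (requires NCNN_VULKAN=ON)
#include "net.h"
#include <cstdio>

int main() {
    // One-time initialization: loads the Vulkan driver (or simplevk),
    // creates the VkInstance, and enumerates physical devices.
    ncnn::create_gpu_instance();

    if (ncnn::get_gpu_count() == 0) {
        fprintf(stderr, "no Vulkan-capable GPU, falling back to CPU\n");
    } else {
        ncnn::Net net;
        net.opt.use_vulkan_compute = true;
        // ... load param/model files and run inference ...
    }

    // One-time teardown at shutdown, after all nets are destroyed.
    ncnn::destroy_gpu_instance();
    return 0;
}
```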
Theoretical Basis
GPU initialization flow:
1. create_gpu_instance()
├── Load Vulkan driver (or simplevk)
├── Create VkInstance
├── Enumerate physical devices
└── Query per-device capabilities
2. get_gpu_count() → N devices available
3. get_gpu_info(index)
├── Device name, vendor, driver version
├── Memory properties (heap sizes, types)
├── Compute capabilities (fp16, int8, cooperative matrix)
└── Limits (max workgroup size, etc.)
4. net.set_vulkan_device(index)
└── Bind network to specific GPU
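The four steps above map onto ncnn's public API roughly as follows. This is a sketch assuming a build with NCNN_VULKAN=ON; the GpuInfo accessors shown (support_fp16_arithmetic, support_cooperative_matrix) come from ncnn's gpu.h, and exact accessor names can vary between ncnn versions.

```cpp
#include "gpu.h"
#include "net.h"
#include <cstdio>

int main() {
    ncnn::create_gpu_instance();                            // step 1

    int count = ncnn::get_gpu_count();                      // step 2
    for (int i = 0; i < count; i++) {
        const ncnn::GpuInfo& info = ncnn::get_gpu_info(i);  // step 3
        printf("gpu %d: fp16 arith=%d coopmat=%d\n",
               i,
               (int)info.support_fp16_arithmetic(),
               (int)info.support_cooperative_matrix());
    }

    ncnn::Net net;
    net.opt.use_vulkan_compute = true;
    net.set_vulkan_device(0);                               // step 4: bind GPU 0
    // ... net.load_param(...); net.load_model(...); ...

    ncnn::destroy_gpu_instance();
    return 0;
}
```

Capabilities queried in step 3 should drive the options set before loading the model, e.g. enabling fp16 packing only when the device reports support for it.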