
Principle:VainF Torch Pruning Model Complexity Profiling

From Leeroopedia


Metadata

Field Value
Paper DepGraph
Domains Deep_Learning, Model_Analysis, Pruning
Last Updated 2026-02-08 00:00 GMT

Overview

Quantitative measurement of a model's computational cost (FLOPs) and memory footprint (parameter count), used to evaluate how much compression pruning achieves.

Description

Model complexity profiling measures two key metrics: FLOPs (floating-point operations per inference) and parameter count. These are computed by registering forward hooks on the model so that operations can be counted per layer during a single forward pass; each multiply-accumulate is counted as two FLOPs, hence the factor of 2 in the formulas below. Profiling is essential for pruning because it provides an objective measure of the compression achieved: comparing before/after profiles verifies that pruning actually reduced computational cost. Profiling runs on a deep copy of the model so that the hooks leave the original model untouched.
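The hook-based counting described above can be sketched with plain PyTorch. This is a hypothetical minimal profiler for illustration, not Torch-Pruning's actual implementation; the function name `profile` and its structure are assumptions, and only Conv2d and Linear layers are counted:

```python
import copy
import torch
import torch.nn as nn

def profile(model: nn.Module, example_inputs: torch.Tensor):
    """Return (flops, n_params) for one forward pass on example_inputs."""
    model = copy.deepcopy(model).eval()  # deep copy: avoid side effects on the original
    flops = 0
    handles = []

    def conv_hook(m, inp, out):
        nonlocal flops
        # 2 * C_in * C_out * K_h * K_w * H_out * W_out / groups
        kh, kw = m.kernel_size
        h_out, w_out = out.shape[-2:]
        flops += 2 * m.in_channels * m.out_channels * kh * kw * h_out * w_out // m.groups

    def linear_hook(m, inp, out):
        nonlocal flops
        # 2 * in_features * out_features (per sample; batch size 1 assumed here)
        flops += 2 * m.in_features * m.out_features

    for mod in model.modules():
        if isinstance(mod, nn.Conv2d):
            handles.append(mod.register_forward_hook(conv_hook))
        elif isinstance(mod, nn.Linear):
            handles.append(mod.register_forward_hook(linear_hook))

    with torch.no_grad():
        model(example_inputs)  # hooks fire and accumulate FLOPs per layer
    for h in handles:
        h.remove()

    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return flops, n_params
```

Because the hooks read the actual output shapes, this handles arbitrary strides and paddings without re-deriving output sizes analytically.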

Usage

Profile the model before and after pruning to measure the compression ratio. During progressive pruning, re-profile after each pruning step to check whether the target speedup has been reached.
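The progressive-pruning check can be sketched as a simple loop. All names here (`prune_to_target`, the `measure_flops` and `prune_step` callbacks) are hypothetical placeholders, not Torch-Pruning API:

```python
def prune_to_target(base_flops, measure_flops, prune_step,
                    target_speedup=2.0, max_iters=100):
    """Prune one step at a time until the measured speedup reaches the target.

    base_flops    -- FLOPs of the unpruned model
    measure_flops -- callable returning the current model's FLOPs
    prune_step    -- callable that removes one increment of structure
    """
    for _ in range(max_iters):
        if base_flops / measure_flops() >= target_speedup:
            break  # target speedup achieved; stop pruning
        prune_step()
    return base_flops / measure_flops()
```

Re-measuring after every step (rather than pruning a fixed fraction once) matters because removing coupled channels can change FLOPs non-uniformly across layers.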

Theoretical Basis

For each layer type, FLOPs are computed analytically:

  • Conv2d FLOPs = 2 * C_in * C_out * K_h * K_w * H_out * W_out / groups
  • Linear FLOPs = 2 * in_features * out_features
  • Total parameters = Σ numel(param) for all trainable parameters
  • Speedup ratio = FLOPs_original / FLOPs_pruned

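The analytical formulas above translate directly into Python. A minimal sketch (function names are illustrative, not part of Torch-Pruning):

```python
def conv2d_flops(c_in, c_out, k_h, k_w, h_out, w_out, groups=1):
    # 2 * C_in * C_out * K_h * K_w * H_out * W_out / groups
    return 2 * c_in * c_out * k_h * k_w * h_out * w_out // groups

def linear_flops(in_features, out_features):
    # 2 * in_features * out_features
    return 2 * in_features * out_features

def speedup(flops_original, flops_pruned):
    # Speedup ratio = FLOPs_original / FLOPs_pruned
    return flops_original / flops_pruned

# Example: a 3x3 conv from 64 to 128 channels with 56x56 output
# costs 2 * 64 * 128 * 3 * 3 * 56 * 56 = 462,422,016 FLOPs.
```

Halving the channel count of such a layer roughly quarters its FLOPs, since both `c_in` and `c_out` shrink; this is why channel pruning can yield superlinear FLOP reductions.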