Model Compression

Pruning

Pruning removes redundant weights, filters, or neurons from a trained network to shrink its size and inference cost with little loss in accuracy. Introduced by Li et al. in Pruning Filters for Efficient ConvNets.
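As a rough illustration of the saliency criterion used in that paper, the sketch below ranks the filters of a convolutional layer by their L1 norm and keeps the largest ones. PyTorch is assumed only for convenience; the function name and the prune_ratio parameter are illustrative, not part of the original work.

```python
import torch
import torch.nn as nn

def rank_filters_by_l1(conv: nn.Conv2d, prune_ratio: float = 0.5) -> torch.Tensor:
    """Return indices of the filters to keep, ranked by L1 norm.

    Filters with the smallest sum of absolute kernel weights are
    treated as least important and are the ones pruned.
    """
    # Weight shape is (out_channels, in_channels, kH, kW);
    # sum absolute values over the last three dims to get one score per filter.
    l1_norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    n_keep = max(1, int(conv.out_channels * (1.0 - prune_ratio)))
    # Keep the filters with the largest L1 norms, preserving channel order.
    keep = torch.argsort(l1_norms, descending=True)[:n_keep]
    return torch.sort(keep).values

# Usage: drop half of the filters in a 3x3 convolution.
conv = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3)
kept = rank_filters_by_l1(conv, prune_ratio=0.5)
print(f"keeping {kept.numel()} of {conv.out_channels} filters")
```

In the full method, the kept filters are copied into a thinner layer (and the corresponding input channels of the next layer are removed), after which the network is fine-tuned to recover accuracy.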

Tasks

Task                        Papers   Share
Quantization                    47   7.51%
Language Modelling              38   6.07%
Model Compression               32   5.11%
Network Pruning                 28   4.47%
Large Language Model            22   3.51%
Federated Learning              22   3.51%
Image Classification            17   2.72%
Computational Efficiency        14   2.24%
Image Generation                10   1.60%
