Model Compression

Pruning

Pruning is a model compression technique that removes redundant parameters, such as individual weights or entire convolutional filters, from a trained network to reduce model size and inference cost. Introduced by Li et al. in Pruning Filters for Efficient ConvNets.
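
Below is a minimal sketch of filter pruning by L1 norm, in the spirit of Li et al.'s Pruning Filters for Efficient ConvNets: filters with the smallest absolute weight sums are treated as least important and dropped. The function names, the pruning ratio, and the example layer sizes are illustrative assumptions, not part of the original page.

```python
# Sketch: L1-norm filter pruning for a single Conv2d layer (illustrative only).
import torch
import torch.nn as nn

def rank_filters_by_l1(conv: nn.Conv2d) -> torch.Tensor:
    """Return filter indices sorted by ascending L1 norm (weakest first)."""
    # conv.weight has shape (out_channels, in_channels, kH, kW);
    # sum absolute values over everything except the output-channel axis.
    l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    return torch.argsort(l1)

def prune_filters(conv: nn.Conv2d, ratio: float = 0.3) -> nn.Conv2d:
    """Build a smaller Conv2d keeping the (1 - ratio) filters with the largest L1 norm."""
    order = rank_filters_by_l1(conv)
    n_keep = max(1, int(conv.out_channels * (1.0 - ratio)))
    keep = torch.sort(order[-n_keep:]).values  # keep strongest filters, preserve original order
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

# Example: prune half of the 128 filters in a hypothetical layer.
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
smaller = prune_filters(conv, ratio=0.5)
print(smaller.weight.shape)  # torch.Size([64, 64, 3, 3])
```

In a full pipeline the downstream layer's input channels would also be sliced to match, and the network is typically fine-tuned after pruning to recover accuracy.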

Tasks


Task                      Papers  Share
Network Pruning           42      7.59%
Quantization              38      6.87%
Model Compression         32      5.79%
Language Modelling        30      5.42%
Image Classification      23      4.16%
Federated Learning        21      3.80%
Computational Efficiency  17      3.07%
Retrieval                 10      1.81%
Large Language Model      10      1.81%
