About

Network Pruning is a popular approach to reducing a heavy network to a lightweight form by removing redundancy. In this approach, a complex over-parameterized network is first trained, then pruned according to some criterion, and finally fine-tuned to achieve comparable performance with far fewer parameters.

Source: Ensemble Knowledge Distillation for Learning Improved and Efficient Networks
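As a rough illustration of that train-prune-fine-tune pipeline, here is a minimal sketch of one common criterion, global magnitude pruning, in PyTorch; the sparsity level and the surrounding training loops are placeholders, not part of any specific paper's method.

```python
import torch

def magnitude_prune(model, sparsity=0.9):
    """Zero out the smallest-magnitude weights globally (one common criterion)."""
    all_weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(all_weights, sparsity)   # global magnitude cutoff
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.dim() > 1:                 # prune weight matrices/convs, not biases
                masks[name] = (p.abs() > threshold).float()
                p.mul_(masks[name])
    return masks

# Pipeline (train/fine-tune loops omitted):
#   train(model); masks = magnitude_prune(model, 0.9); finetune(model, masks)
```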


Greatest papers with code

Movement Pruning: Adaptive Sparsity by Fine-Tuning

NeurIPS 2020 huggingface/transformers

Magnitude pruning is a widely used strategy for reducing model size in pure supervised learning; however, it is less effective in the transfer learning regime that has become standard for state-of-the-art natural language processing applications.

NETWORK PRUNING TRANSFER LEARNING
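For contrast with magnitude pruning, a minimal sketch of the movement-pruning idea from the paper above: importance accumulates for weights that move away from zero during fine-tuning. The simple SGD-style score update and the keep_ratio are illustrative, not the paper's exact training recipe.

```python
import torch

# Scores accumulate during fine-tuning; initialize once:
#   scores = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.dim() > 1}

def update_movement_scores(model, scores, lr=0.01):
    """First-order movement score: weights moving away from zero gain importance."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in scores and p.grad is not None:
                scores[name] -= lr * p.grad * p   # grows when |W| is increasing

def movement_mask(scores, keep_ratio=0.10):
    """Keep only the top-scoring fraction of weights in each matrix."""
    masks = {}
    for name, s in scores.items():
        k = max(1, int(keep_ratio * s.numel()))
        thresh = torch.topk(s.flatten(), k).values.min()
        masks[name] = (s >= thresh).float()
    return masks
```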

What Do Compressed Deep Neural Networks Forget?

13 Nov 2019 google-research/google-research

However, top-line accuracy conceals significant differences in how individual classes and images are impacted by model compression techniques.

FAIRNESS INTERPRETABILITY TECHNIQUES FOR DEEP LEARNING MODEL COMPRESSION NETWORK PRUNING OUTLIER DETECTION QUANTIZATION
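The paper's observation can be made concrete by breaking accuracy down per class before and after compression; a minimal sketch using hypothetical label and prediction arrays.

```python
import numpy as np

def per_class_accuracy(labels, preds, num_classes):
    """Accuracy broken down by class rather than a single top-line number."""
    acc = np.zeros(num_classes)
    for c in range(num_classes):
        idx = labels == c
        acc[c] = (preds[idx] == c).mean() if idx.any() else np.nan
    return acc

# Classes whose accuracy drops far more than the overall average after pruning
# are what the top-line metric conceals (arrays below are hypothetical):
#   delta = per_class_accuracy(y, dense_preds, K) - per_class_accuracy(y, pruned_preds, K)
#   hardest_hit = np.argsort(delta)[::-1][:10]
```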

Network Pruning via Transformable Architecture Search

NeurIPS 2019 D-X-Y/NAS-Projects

The maximum probability for the size in each distribution serves as the width and depth of the pruned network, whose parameters are learned by knowledge transfer, e.g., knowledge distillation, from the original networks.

KNOWLEDGE DISTILLATION NETWORK PRUNING NEURAL ARCHITECTURE SEARCH TRANSFER LEARNING
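A minimal sketch of the selection step described in the snippet above, assuming hypothetical stages, candidate channel counts, and learned logits; the distillation-based training of the selected network is omitted.

```python
import torch

# Hypothetical learned logits, one categorical distribution per searchable
# dimension (here, the width of two stages).
width_logits = {"stage1": torch.tensor([0.1, 1.2, 0.3]),
                "stage2": torch.tensor([0.5, 0.2, 2.0])}
width_candidates = {"stage1": [16, 32, 64], "stage2": [32, 64, 128]}

# The pruned network takes, per stage, the candidate size with maximum probability.
pruned_widths = {s: width_candidates[s][int(torch.softmax(l, dim=0).argmax())]
                 for s, l in width_logits.items()}
print(pruned_widths)  # {'stage1': 32, 'stage2': 128}
```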

Rethinking the Value of Network Pruning

ICLR 2019 Eric-mingjie/rethinking-network-pruning

Our observations are consistent across multiple network architectures, datasets, and tasks, and imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model; 2) the learned "important" weights of the large model are typically not useful for the small pruned model; 3) the pruned architecture itself, rather than a set of inherited "important" weights, is what matters most to the final model's efficiency, which suggests that in some cases pruning can be useful as an architecture search paradigm.

NETWORK PRUNING NEURAL ARCHITECTURE SEARCH
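Finding 3 suggests a simple baseline: read the layer sizes off the pruned network and retrain that architecture from random initialization. A minimal sketch, with hypothetical channel counts.

```python
import torch

def scratch_baseline(pruned_channels, in_ch=3):
    """Rebuild the pruned architecture with fresh random weights: per the paper,
    the architecture, not the inherited weights, is what matters."""
    layers = []
    for out_ch in pruned_channels:          # channel counts read off the pruned net
        layers += [torch.nn.Conv2d(in_ch, out_ch, 3, padding=1), torch.nn.ReLU()]
        in_ch = out_ch
    return torch.nn.Sequential(*layers)     # train this from scratch and compare

model = scratch_baseline([32, 48, 96])      # hypothetical channel counts
```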

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

ICLR 2019 google-research/lottery-ticket-hypothesis

Based on these results, we articulate the "lottery ticket hypothesis": dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that, when trained in isolation, reach test accuracy comparable to the original network in a similar number of iterations.

NETWORK PRUNING
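A minimal sketch of the iterative magnitude pruning with rewinding that the hypothesis implies; train_fn, the per-round pruning fraction, and the number of rounds are placeholders.

```python
import copy
import torch

def find_winning_ticket(model, train_fn, prune_fraction=0.2, rounds=5):
    """Iterative magnitude pruning with rewinding to the original initialization."""
    init_state = copy.deepcopy(model.state_dict())      # theta_0
    masks = {n: torch.ones_like(p)
             for n, p in model.named_parameters() if p.dim() > 1}
    for _ in range(rounds):
        train_fn(model, masks)                          # train with masks applied
        with torch.no_grad():
            for n, p in model.named_parameters():
                if n in masks:
                    alive = p.abs() * masks[n]
                    k = int(prune_fraction * int(masks[n].sum()))
                    if k > 0:                           # drop the k smallest survivors
                        vals = alive[masks[n].bool()].flatten()
                        thresh = torch.kthvalue(vals, k).values
                        masks[n] *= (alive > thresh).float()
        model.load_state_dict(init_state)               # rewind weights to theta_0
    return masks                                        # the candidate "winning ticket"
```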

FastDepth: Fast Monocular Depth Estimation on Embedded Systems

8 Mar 2019 dwofk/fast-depth

In this paper, we address the problem of fast depth estimation on embedded systems.

MONOCULAR DEPTH ESTIMATION NETWORK PRUNING

Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion

CVPR 2020 NVlabs/DeepInversion

We introduce DeepInversion, a new method for synthesizing images from the image distribution used to train a deep neural network.

CONTINUAL LEARNING NETWORK PRUNING TRANSFER LEARNING
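A minimal sketch of the core DeepInversion objective as the paper describes it: optimize the synthetic images (not the frozen weights) for a target class while matching the network's stored BatchNorm statistics. The hook bookkeeping and the loss weight alpha are illustrative, and the paper's additional image-prior terms are omitted.

```python
import torch
import torch.nn.functional as F

def deep_inversion_loss(model, images, targets, alpha=1.0):
    """Classification loss on target labels plus a penalty for deviating from
    the BatchNorm layers' stored (running) feature statistics."""
    bns = [m for m in model.modules() if isinstance(m, torch.nn.BatchNorm2d)]
    feats = {}
    hooks = [m.register_forward_hook(
                 lambda mod, inp, out, k=i: feats.update({k: inp[0]}))
             for i, m in enumerate(bns)]
    loss = F.cross_entropy(model(images), targets)
    for i, bn in enumerate(bns):
        x = feats[i]                         # input features of the i-th BN layer
        loss = loss + alpha * (F.mse_loss(x.mean(dim=(0, 2, 3)), bn.running_mean)
                               + F.mse_loss(x.var(dim=(0, 2, 3)), bn.running_var))
    for h in hooks:
        h.remove()
    return loss  # backpropagate into `images` only; the model stays frozen
```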

Learning Sparse Networks Using Targeted Dropout

31 May 2019 for-ai/TD

Before computing the gradients for each weight update, targeted dropout stochastically selects a set of units or weights to be dropped using a simple self-reinforcing sparsity criterion and then computes the gradients for the remaining weights.

NETWORK PRUNING NEURAL NETWORK COMPRESSION
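A minimal sketch of the weight-level variant of that procedure; gamma (the targeting proportion) and alpha (the drop rate) follow the paper's two hyperparameters, but the function name and defaults are illustrative.

```python
import torch

def targeted_dropout(weight, gamma=0.5, alpha=0.5, training=True):
    """Drop weights stochastically, but only from the lowest-magnitude
    `gamma` fraction (the targets), each with probability `alpha`."""
    if not training:
        return weight
    k = int(gamma * weight.numel())
    if k == 0:
        return weight
    thresh = torch.kthvalue(weight.abs().flatten(), k).values
    targeted = weight.abs() <= thresh                   # candidates for pruning
    drop = targeted & (torch.rand_like(weight) < alpha)
    return weight * (~drop).float()                     # gradients flow to survivors
```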

What is the State of Neural Network Pruning?

6 Mar 2020 jjgo/shrinkbench

Neural network pruning, the task of reducing the size of a network by removing parameters, has been the subject of a great deal of work in recent years.

NETWORK PRUNING

EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning

ECCV 2020 anonymous47823493/EagleEye

Many algorithms try to predict the performance of pruned sub-nets by introducing various evaluation methods.

NETWORK PRUNING
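The paper's own evaluation method is adaptive batch normalization: recalibrate BN statistics on a few batches before scoring each candidate sub-net. A minimal sketch, with calib_loader and eval_fn as placeholders.

```python
import torch

def adaptive_bn_eval(subnet, calib_loader, eval_fn, num_batches=30):
    """Recalibrate BatchNorm running statistics on a few batches before
    scoring a candidate sub-net, rather than evaluating it with stale stats."""
    for m in subnet.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            m.reset_running_stats()
    subnet.train()                      # BN updates running stats in train mode
    with torch.no_grad():
        for i, (x, _) in enumerate(calib_loader):
            subnet(x)
            if i + 1 >= num_batches:
                break
    subnet.eval()
    return eval_fn(subnet)              # rank candidate sub-nets by this score
```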