
Network Pruning

21 papers with code · Methodology


Greatest papers with code

Rethinking the Value of Network Pruning

ICLR 2019 Eric-mingjie/rethinking-network-pruning

Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm.

NETWORK PRUNING NEURAL ARCHITECTURE SEARCH
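
The comparison behind these observations can be sketched in a few lines: prune a trained model, then either fine-tune the inherited weights or re-initialize the surviving filters and train the same pruned architecture from scratch. The L1-norm criterion and the helpers build_model and train below are illustrative assumptions, not the paper's exact protocol.

import copy
import torch

def prune_smallest_filters(model, ratio=0.5):
    # Illustrative structured criterion (an assumption): zero out the conv
    # filters with the smallest L1 norms.
    for m in model.modules():
        if isinstance(m, torch.nn.Conv2d):
            norms = m.weight.detach().abs().sum(dim=(1, 2, 3))
            drop = norms.argsort()[:int(ratio * norms.numel())]
            m.weight.data[drop] = 0.0
    return model

def reinit_keeping_architecture(model):
    # Keep which filters survive (the architecture) but discard their values.
    for m in model.modules():
        if isinstance(m, torch.nn.Conv2d):
            keep = m.weight.detach().abs().sum(dim=(1, 2, 3)) > 0
            torch.nn.init.kaiming_normal_(m.weight)
            m.weight.data[~keep] = 0.0
    return model

big = train(build_model())                                # large, over-parameterized model
pruned = prune_smallest_filters(copy.deepcopy(big))
finetuned = train(pruned, epochs=20)                      # inherit the "important" weights
scratch = train(reinit_keeping_architecture(copy.deepcopy(pruned)), epochs=160)
# The paper's finding: training the pruned architecture from scratch typically
# matches or beats fine-tuning the inherited weights.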

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

ICLR 2019 google-research/lottery-ticket-hypothesis

Based on these results, we articulate the "lottery ticket hypothesis": dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that, when trained in isolation, reach test accuracy comparable to the original network in a similar number of iterations.

NETWORK PRUNING
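
A minimal sketch of the procedure used to find winning tickets: iterative magnitude pruning with a rewind to the original initialization. build_model and train are hypothetical helpers; the prune rate and round count are illustrative.

import copy
import torch

model = build_model()
init_state = copy.deepcopy(model.state_dict())            # theta_0, kept for rewinding
masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

for _ in range(5):                                        # several prune/rewind rounds
    train(model, masks)                                   # train with the masks applied
    for name, mask in masks.items():
        w = dict(model.named_parameters())[name].detach().abs() * mask
        k = max(1, int(0.2 * int(mask.sum())))            # prune 20% of the remaining weights
        thresh = w[mask.bool()].kthvalue(k).values
        masks[name] = (w > thresh).float() * mask
    model.load_state_dict(init_state)                     # rewind survivors to their original values
    for name, p in model.named_parameters():
        if name in masks:
            p.data *= masks[name]
# The surviving subnetwork, trained from theta_0 in isolation, is the winning ticket.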

Network Pruning via Transformable Architecture Search

23 May 2019 D-X-Y/GDAS

The maximum probability for the size in each distribution serves as the width and depth of the pruned network, whose parameters are learned by knowledge transfer, e.g., knowledge distillation, from the original networks.

NETWORK PRUNING NEURAL ARCHITECTURE SEARCH TRANSFER LEARNING
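
Two pieces of that description can be sketched directly: taking the maximum-probability entry of a learned distribution over candidate channel counts as the pruned width, and training the pruned network with a knowledge-distillation loss. The names below (width_logits, student_logits, teacher_logits) are illustrative assumptions, and the KD form shown is the common softened cross-entropy, not necessarily the paper's exact transfer scheme.

import torch
import torch.nn.functional as F

candidate_widths = torch.tensor([16, 32, 48, 64])         # candidate channel counts for one layer
width_logits = torch.nn.Parameter(torch.zeros(4))         # learned during the search phase

probs = F.softmax(width_logits, dim=0)
pruned_width = candidate_widths[probs.argmax()]           # maximum-probability size -> pruned width

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    # Standard knowledge distillation: soften the teacher's outputs and mix
    # the resulting KL term with the usual cross-entropy on the labels.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard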

FastDepth: Fast Monocular Depth Estimation on Embedded Systems

8 Mar 2019 dwofk/fast-depth

In this paper, we address the problem of fast depth estimation on embedded systems.

MONOCULAR DEPTH ESTIMATION NETWORK PRUNING

Learning Sparse Networks Using Targeted Dropout

31 May 2019 for-ai/TD

Before computing the gradients for each weight update, targeted dropout stochastically selects a set of units or weights to be dropped using a simple self-reinforcing sparsity criterion and then computes the gradients for the remaining weights.

NETWORK PRUNING NEURAL NETWORK COMPRESSION
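
A minimal sketch of that step for a single weight matrix, assuming the simplest form of the criterion: target the lowest-magnitude fraction of weights, drop each targeted weight with some probability, and let gradients flow only through the survivors. The gamma and alpha values and the weight-level (rather than unit-level) variant shown are assumptions.

import torch

def targeted_dropout(weight, gamma=0.5, alpha=0.66, training=True):
    # Target the gamma fraction of lowest-magnitude weights (the
    # self-reinforcing criterion), then drop each targeted weight with
    # probability alpha; gradients flow only to the kept weights.
    if not training:
        return weight
    k = max(1, int(gamma * weight.numel()))
    thresh = weight.detach().abs().flatten().kthvalue(k).values
    targeted = weight.detach().abs() <= thresh
    drop = targeted & (torch.rand_like(weight) < alpha)
    return weight * (~drop).float()

w = torch.nn.Parameter(torch.randn(128, 64))
x = torch.randn(32, 64)
out = x @ targeted_dropout(w).t()                         # forward pass with the masked weights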

PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning

CVPR 2018 arunmallya/packnet

This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting.

NETWORK PRUNING
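
The mechanism behind this is iterative pruning with per-task masks: after training each task on the currently free weights, keep only the highest-magnitude ones for that task and release the rest to later tasks. A sketch under those assumptions; train, model_weight, and tasks are hypothetical, as is the 50% prune ratio.

import torch

def pack_task(weight, free_mask, prune_ratio=0.5):
    # Among the weights this task was free to use, prune the lowest-magnitude
    # fraction and reserve the rest for the task; earlier tasks' weights
    # (free_mask == 0) are never touched.
    trained = weight.detach().abs() * free_mask
    k = max(1, int(prune_ratio * int(free_mask.sum())))
    thresh = trained[free_mask.bool()].kthvalue(k).values
    task_mask = (trained > thresh).float()
    weight.data *= task_mask + (1 - free_mask)            # zero the released weights only
    return task_mask, free_mask - task_mask               # mask for this task, remaining capacity

free = torch.ones_like(model_weight)                      # model_weight: hypothetical weight tensor
task_masks = []
for task in tasks:                                        # tasks: hypothetical iterable of tasks
    train(task, trainable_mask=free)                      # update only the currently free weights
    mask, free = pack_task(model_weight, free)
    task_masks.append(mask)                               # at inference, apply the masks of tasks 1..t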

Importance Estimation for Neural Network Pruning

CVPR 2019 NVlabs/Taylor_pruning

On ResNet-101, we achieve a 40% FLOPs reduction by removing 30% of the parameters, with a loss of 0.02% in the top-1 accuracy on ImageNet.

NETWORK PRUNING
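
The entry quotes only the headline result; the criterion behind it scores each filter with a first-order Taylor expansion of the loss change caused by removing it. A minimal sketch of that score, accumulated over a few mini-batches, with the per-filter aggregation and the helpers model, loader, and criterion taken as assumptions.

import torch

scores = {}
for i, (x, y) in enumerate(loader):
    model.zero_grad()
    criterion(model(x), y).backward()
    for name, m in model.named_modules():
        if isinstance(m, torch.nn.Conv2d):
            w, g = m.weight.detach(), m.weight.grad.detach()
            filt = (w * g).sum(dim=(1, 2, 3)).pow(2)      # one squared Taylor score per filter
            scores[name] = scores.get(name, 0) + filt
    if i == 9:                                            # a few mini-batches suffice for the estimate
        break

# Filters with the smallest accumulated scores are the candidates for removal.
all_scores = torch.cat([s.flatten() for s in scores.values()])
threshold = all_scores.kthvalue(int(0.3 * all_scores.numel())).values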

Efficient Sparse-Winograd Convolutional Neural Networks

ICLR 2018 xingyul/Sparse-Winograd-CNN

First, we move the ReLU operation into the Winograd domain to increase the sparsity of the transformed activations.

NETWORK PRUNING
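
A numpy sketch of a 1D F(2,3) Winograd convolution with the ReLU applied to the transformed activations B^T d instead of in the spatial domain, so the elementwise multiply sees sparse operands. The transform matrices are the standard F(2,3) ones; note that this placement changes the nonlinearity rather than reproducing a spatial ReLU, which is why the network is trained with ReLU in this position.

import numpy as np

# Standard F(2,3) Winograd transform matrices.
BT = np.array([[1, 0, -1, 0],
               [0, 1, 1, 0],
               [0, -1, 1, 0],
               [0, 1, 0, -1]], dtype=float)
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0, 0.0, 1.0]])
AT = np.array([[1, 1, 1, 0],
               [0, 1, -1, -1]], dtype=float)

d = np.random.randn(4)            # 4 input activations -> 2 outputs
g = np.random.randn(3)            # 3-tap filter, prunable in the Winograd domain

V = np.maximum(BT @ d, 0.0)       # ReLU applied to the transformed activations (now sparse)
U = G @ g                         # transformed filter (sparsified by Winograd-domain pruning)
y = AT @ (U * V)                  # two outputs from a single elementwise multiply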

A Closer Look at Structured Pruning for Neural Network Compression

10 Oct 2018 BayesWatch/pytorch-prunes

Structured pruning is a popular method for compressing a neural network: given a large trained network, one alternates between removing channel connections and fine-tuning, reducing the overall width of the network.

NETWORK PRUNING NEURAL NETWORK COMPRESSION
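
That alternation can be sketched as a loop that scores channels, zeroes the lowest-scoring ones, and fine-tunes before the next round. The BatchNorm-scale criterion, the fraction pruned per round, and the helpers model and finetune are assumptions used only for illustration.

import torch

def zero_lowest_channels(model, frac=0.05):
    # Score each channel by the magnitude of its BatchNorm scale (one common
    # proxy, assumed here) and zero the lowest-scoring fraction.
    for m in model.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            score = m.weight.detach().abs()
            drop = score.argsort()[:max(1, int(frac * score.numel()))]
            m.weight.data[drop] = 0.0                     # silences the channel's output
            m.bias.data[drop] = 0.0
    return model

for _ in range(10):                                       # alternate pruning and fine-tuning
    zero_lowest_channels(model, frac=0.05)
    finetune(model, epochs=2)
# Zeroed channels can then be physically removed to shrink the network's width.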

EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis

15 May 2019 alecwangcq/EigenDamage-Pytorch

Reducing the test time resource requirements of a neural network while preserving test accuracy is crucial for running inference on resource-constrained devices.

NETWORK PRUNING