
Network Pruning

14 papers with code · Methodology

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

Rethinking the Value of Network Pruning

ICLR 2019 Eric-mingjie/rethinking-network-pruning

Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency of the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm.

NETWORK PRUNING NEURAL ARCHITECTURE SEARCH

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

ICLR 2019 google-research/lottery-ticket-hypothesis

Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations.

NETWORK PRUNING
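
The procedure implied by the hypothesis can be summarized in a short loop: train, prune the smallest-magnitude weights, rewind the survivors to their original initialization, and repeat. The sketch below is a minimal illustration of that loop, not the repository's code; `train_fn`, `prune_fraction`, and `rounds` are hypothetical names, and `train_fn` is assumed to re-apply the masks after every optimizer step.

```python
import copy
import torch

def find_winning_ticket(model, train_fn, prune_fraction=0.2, rounds=5):
    # Snapshot the original initialization theta_0 for later rewinding.
    init_state = copy.deepcopy(model.state_dict())
    # One binary mask per weight matrix; biases are left unpruned.
    masks = {name: torch.ones_like(p)
             for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train_fn(model, masks)  # train while zeroing masked weights

        # Prune the smallest-magnitude surviving weights of each tensor.
        for name, p in model.named_parameters():
            if name not in masks:
                continue
            alive = p.data.abs()[masks[name].bool()]
            k = int(prune_fraction * alive.numel())
            if k == 0:
                continue
            threshold = alive.kthvalue(k).values
            masks[name] = ((p.data.abs() > threshold)
                           & masks[name].bool()).float()

        # Rewind: surviving weights go back to their initial values.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])
    return masks
```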

PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning

CVPR 2018 arunmallya/packnet

This paper presents a method for adding multiple tasks to a single deep neural network while avoiding catastrophic forgetting.

NETWORK PRUNING
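
A minimal sketch of the mask bookkeeping behind this idea, assuming one ownership map per weight tensor: after training task t, the largest still-free weights are claimed for t and the rest are released; freezing the gradients of weights owned by earlier tasks is left to the training loop. Class and method names are hypothetical.

```python
import torch

class PackNetMasks:
    """Ownership map for one weight tensor: 0 = free, t = claimed by task t."""

    def __init__(self, weight):
        self.owner = torch.zeros_like(weight, dtype=torch.long)

    def prune_for_task(self, weight, task_id, keep_fraction=0.5):
        free = self.owner == 0
        n_free = int(free.sum())
        if n_free == 0:
            return
        k = max(1, int(keep_fraction * n_free))
        scores = weight.detach().abs()
        threshold = scores[free].kthvalue(n_free - k + 1).values  # k-th largest
        claimed = (scores >= threshold) & free
        self.owner[claimed] = task_id
        with torch.no_grad():
            weight[free & ~claimed] = 0.0  # release unclaimed free weights

    def inference_mask(self, task_id):
        # Task t uses exactly the weights claimed by tasks 1..t.
        return ((self.owner > 0) & (self.owner <= task_id)).float()
```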

Efficient Sparse-Winograd Convolutional Neural Networks

ICLR 2018 xingyul/Sparse-Winograd-CNN

First, we move the ReLU operation into the Winograd domain to increase the sparsity of the transformed activations.

NETWORK PRUNING
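
To make the sparsity of weights and activations line up in the elementwise product that Winograd convolution performs, both pruning and ReLU have to act in the transformed domain. A minimal single-tile sketch of that idea for F(2x2, 3x3) is below; note that a Winograd-domain ReLU defines a different nonlinearity than spatial ReLU, so such a network is trained with it from the start. The function name and mask argument are illustrative.

```python
import numpy as np

# Standard F(2x2, 3x3) Winograd transform matrices.
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=np.float32)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=np.float32)
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=np.float32)

def sparse_winograd_tile(tile, kernel, wino_mask):
    """One 4x4 input tile -> one 2x2 output tile."""
    U = (G @ kernel @ G.T) * wino_mask  # pruned Winograd-domain weights
    V = B_T @ tile @ B_T.T              # input tile in the Winograd domain
    V = np.maximum(V, 0.0)              # ReLU moved into the Winograd domain
    return A_T @ (U * V) @ A_T.T        # inverse transform of the product
```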

A Closer Look at Structured Pruning for Neural Network Compression

10 Oct 2018 BayesWatch/pytorch-prunes

Structured pruning is a popular method for compressing a neural network: given a large trained network, one alternates between removing channel connections and fine-tuning, reducing the overall width of the network.

NETWORK PRUNING NEURAL NETWORK COMPRESSION
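
One step of that alternation can be sketched for a pair of adjacent convolutions, using the L1 norm of each filter as a (commonly used) saliency score; the sketch omits BatchNorm and skip-connection bookkeeping, and each call would be followed by fine-tuning epochs. Function and argument names are illustrative.

```python
import torch.nn as nn

def prune_output_channels(conv: nn.Conv2d, next_conv: nn.Conv2d, n_remove: int):
    """Drop the n_remove filters of `conv` with the smallest L1 norm, and the
    matching input channels of the layer that consumes its output."""
    l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # one score per filter
    keep = l1.argsort(descending=True)[: conv.out_channels - n_remove]
    keep, _ = keep.sort()  # preserve the original channel order

    conv.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        conv.bias.data = conv.bias.data[keep].clone()
    conv.out_channels = len(keep)

    next_conv.weight.data = next_conv.weight.data[:, keep].clone()
    next_conv.in_channels = len(keep)
```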

EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis

15 May 2019 alecwangcq/EigenDamage-Pytorch

Reducing the test time resource requirements of a neural network while preserving test accuracy is crucial for running inference on resource-constrained devices.

NETWORK PRUNING

Fast Convex Pruning of Deep Neural Networks

17 Jun 2018 DNNToolBox/Net-Trim-v1

We develop a fast, tractable technique called Net-Trim for simplifying a trained neural network.

NETWORK PRUNING
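
Net-Trim's actual convex program constrains each layer's post-ReLU responses and is typically solved with ADMM; the sketch below keeps only the core idea as a simplification: refit one layer's weights against the trained network's responses under an L1 penalty, here with plain ISTA (gradient step plus soft-thresholding). All names and hyperparameters are illustrative.

```python
import numpy as np

def l1_refit_layer(X, Y, lam=0.1, lr=1e-3, steps=1000):
    """Find a sparse W with X @ W close to the original layer's responses Y."""
    W = np.zeros((X.shape[1], Y.shape[1]), dtype=X.dtype)
    for _ in range(steps):
        grad = X.T @ (X @ W - Y) / X.shape[0]   # gradient of the quadratic fit
        W = W - lr * grad
        # Proximal step for the L1 penalty: soft-thresholding toward zero.
        W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)
    return W
```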

Attention-Based Guided Structured Sparsity of Deep Neural Networks

13 Feb 2018 astorfi/attention-guided-sparsity

Network pruning aims to impose sparsity on a neural network architecture by increasing the proportion of zero-valued weights, reducing model size for energy efficiency and increasing evaluation speed.

NETWORK PRUNING
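
The paper's contribution is an attention mechanism that guides which groups are sparsified; the sketch below shows only the underlying group-sparsity regularizer that drives whole neurons to zero so they can be removed structurally. Treat it as a simplified illustration, with hypothetical names.

```python
import torch.nn as nn

def group_sparsity_penalty(model: nn.Module, lam: float = 1e-4):
    """Group-lasso style penalty: the L2 norm of each neuron's incoming
    weight row, summed over neurons, pushes whole units to exact zero."""
    penalty = 0.0
    for module in model.modules():
        if isinstance(module, nn.Linear):
            penalty = penalty + module.weight.norm(p=2, dim=1).sum()
    return lam * penalty

# Usage in a training step (criterion, x, y are assumed to exist):
# loss = criterion(model(x), y) + group_sparsity_penalty(model)
```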

Importance Estimation for Neural Network Pruning

CVPR 2019 NVlabs/Taylor_pruning

On ResNet-101, we achieve a 40% FLOPS reduction by removing 30% of the parameters, with a loss of 0.02% in the top-1 accuracy on ImageNet.

NETWORK PRUNING
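
A minimal sketch of the first-order Taylor criterion this line of work is built on: the loss change from removing a filter is estimated from the gradient-weight product accumulated over that filter's parameters. The paper's exact estimator (e.g. gating and averaging over mini-batches) differs in details, so the function below is illustrative.

```python
import torch

def taylor_filter_importance(conv_weight: torch.Tensor) -> torch.Tensor:
    """Return one importance score per output channel of a conv weight.
    Call after loss.backward() so conv_weight.grad is populated."""
    contribution = conv_weight.grad * conv_weight.detach()  # elementwise g * w
    return contribution.sum(dim=(1, 2, 3)).pow(2)           # per-filter score
```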

On-Device Neural Language Model Based Word Prediction

COLING 2018 meinwerk/WordPrediction

Recent developments in deep learning with application to language modeling have led to success in tasks of text processing, summarization, and machine translation.

LANGUAGE MODELLING MACHINE TRANSLATION MODEL COMPRESSION NETWORK PRUNING SPEECH RECOGNITION