
Network Pruning

21 papers with code · Methodology

State-of-the-art leaderboards

Latest papers with code

FNNP: Fast Neural Network Pruning Using Adaptive Batch Normalization

ICLR 2020 anonymous47823493/FNNP

In experiments pruning MobileNet V1 and ResNet-50, FNNP outperforms all compared methods by up to 3.8%.

NETWORK PRUNING

7
01 Jan 2020
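The mechanism named in the title is adaptive batch normalization: before a pruned candidate sub-network is scored, its BN running statistics are re-estimated on a small amount of training data so the evaluation reflects the pruned structure rather than stale statistics inherited from the full model. A minimal sketch of that recalibration step, assuming a standard PyTorch model and data loader (the function name and batch budget are illustrative, not the authors' code):

```python
import torch
import torch.nn as nn

def recalibrate_bn(model: nn.Module, loader, num_batches: int = 50, device: str = "cuda"):
    """Re-estimate BatchNorm running statistics of a pruned candidate
    on a few training batches before evaluating its accuracy."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()   # drop stats inherited from the unpruned network
            m.momentum = None         # cumulative moving average over the batches below
    model.train()                     # BN updates running stats only in train mode
    with torch.no_grad():
        for i, (images, _) in enumerate(loader):
            if i >= num_batches:
                break
            model(images.to(device))
    model.eval()
    return model
```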

Importance Estimation for Neural Network Pruning

CVPR 2019 NVlabs/Taylor_pruning

On ResNet-101, we achieve a 40% FLOPS reduction by removing 30% of the parameters, with a loss of 0.02% in the top-1 accuracy on ImageNet.

NETWORK PRUNING

132
25 Jun 2019
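The importance score behind this result is a first-order Taylor expansion of the loss: a filter's importance is the squared sum of gradient times weight over that filter, and the lowest-scoring filters are pruned. A rough sketch of the criterion, assuming a backward pass has already populated the gradients (function and variable names are illustrative, not the NVlabs code):

```python
import torch
import torch.nn as nn

def taylor_importance(conv: nn.Conv2d) -> torch.Tensor:
    """Per-filter first-order Taylor importance: I_f = (sum over filter f of g * w)^2.
    Assumes loss.backward() has been called so conv.weight.grad exists."""
    w, g = conv.weight, conv.weight.grad          # shape: [out_ch, in_ch, kH, kW]
    return (w * g).flatten(1).sum(dim=1).pow(2)   # one score per output filter

def least_important_filters(model: nn.Module, fraction: float = 0.3):
    """Rank every conv filter in the model and return the lowest-scoring fraction."""
    scored = []
    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d) and m.weight.grad is not None:
            for idx, s in enumerate(taylor_importance(m)):
                scored.append((s.item(), name, idx))
    scored.sort()
    return scored[: int(fraction * len(scored))]
```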

Learning Sparse Networks Using Targeted Dropout

31 May 2019 for-ai/TD

Before computing the gradients for each weight update, targeted dropout stochastically selects a set of units or weights to be dropped using a simple self-reinforcing sparsity criterion and then computes the gradients for the remaining weights.

NETWORK PRUNING NEURAL NETWORK COMPRESSION

221
31 May 2019
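Since the summary above spells out the procedure, here is a small illustrative sketch of targeted weight dropout for one weight tensor: the lowest-magnitude fraction gamma of weights is selected as the drop candidates, each candidate is zeroed with probability alpha, and gradients then flow only through the surviving weights. This is a reimplementation for illustration, not the for-ai/TD code; the gamma and alpha values are placeholders.

```python
import torch
import torch.nn.functional as F

def targeted_weight_dropout(w: torch.Tensor, gamma: float = 0.75, alpha: float = 0.66):
    """Stochastically drop the gamma fraction of lowest-magnitude weights,
    each with probability alpha (the self-reinforcing sparsity criterion)."""
    k = int(gamma * w.numel())
    if k == 0:
        return w
    threshold = w.abs().flatten().kthvalue(k).values   # magnitude cut-off
    targeted = w.abs() <= threshold                    # drop candidates
    drop = targeted & (torch.rand_like(w) < alpha)     # stochastic selection
    return w * (~drop).to(w.dtype)                     # zeroed entries get no gradient

# In a layer's forward pass the masked weights are used directly, e.g.:
#   out = F.linear(x, targeted_weight_dropout(self.weight), self.bias)
```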

Network Pruning via Transformable Architecture Search

23 May 2019 D-X-Y/NAS-Projects

The size with the maximum probability in each distribution serves as the width and depth of the pruned network, whose parameters are learned by knowledge transfer, e.g., knowledge distillation, from the original network.

NETWORK PRUNING NEURAL ARCHITECTURE SEARCH TRANSFER LEARNING

392
23 May 2019
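Two ingredients are described in the summary above: a learned distribution over candidate sizes whose most probable entry fixes the pruned width and depth, and knowledge distillation from the original network to train the pruned parameters. A heavily simplified sketch of both pieces, with made-up candidate widths and loss weights (the actual method searches with a differentiable, Gumbel-softmax-style relaxation, which is not reproduced here):

```python
import torch
import torch.nn.functional as F

# Learnable logits over candidate channel counts for one layer.
candidate_widths = [16, 32, 48, 64]
width_logits = torch.nn.Parameter(torch.zeros(len(candidate_widths)))

def selected_width() -> int:
    """After the search, keep the width with the highest probability."""
    probs = F.softmax(width_logits, dim=0)
    return candidate_widths[int(probs.argmax())]

def kd_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.9):
    """Knowledge distillation: soften the unpruned teacher's outputs and mix
    the resulting KL term with the usual cross-entropy on the labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```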

EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis

15 May 2019 alecwangcq/EigenDamage-Pytorch

Reducing the test-time resource requirements of a neural network while preserving test accuracy is crucial for running inference on resource-constrained devices.

NETWORK PRUNING

86
15 May 2019

Towards Learning of Filter-Level Heterogeneous Compression of Convolutional Neural Networks

22 Apr 2019 yochaiz/Slimmable

While mainstream deep learning methods train a neural network's weights with the architecture kept fixed, emerging neural architecture search (NAS) techniques make the architecture itself amenable to training as well.

NETWORK PRUNING NEURAL ARCHITECTURE SEARCH QUANTIZATION

6
22 Apr 2019

Progressive Stochastic Binarization of Deep Networks

3 Apr 2019 JGU-VC/progressive_stochastic_binarization

By focusing computational attention using progressive sampling, we further reduce inference costs on ImageNet by up to 33% (before network pruning).

NETWORK PRUNING QUANTIZATION

2
03 Apr 2019
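For context, one common form of stochastic binarization in the binary-network literature rounds each weight to +1 or -1 with a probability given by a hard sigmoid of its value. The generic sketch below shows only that primitive; whether the paper uses exactly this rule, and its progressive sampling schedule, are assumptions not reproduced here.

```python
import torch

def stochastic_binarize(w: torch.Tensor) -> torch.Tensor:
    """Binarize to {-1, +1} stochastically: P(+1) = clamp((w + 1) / 2, 0, 1)."""
    p = ((w + 1) / 2).clamp(0.0, 1.0)       # hard-sigmoid probability of +1
    sample = torch.rand_like(w) < p
    return torch.where(sample, torch.ones_like(w), -torch.ones_like(w))
```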

Adversarial Robustness vs Model Compression, or Both?

29 Mar 2019 yeshaokai/Robustness-Aware-Pruning-ADMM

Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with inherited initialization from the large model, cannot achieve both adversarial robustness and high standard accuracy.

MODEL COMPRESSION NETWORK PRUNING

18
29 Mar 2019
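To make the finding concrete, "weight pruning in the adversarial setting" means pruning a network that is trained on adversarial examples. The sketch below pairs a standard PGD attack with simple magnitude-based masking; the linked repository instead formulates pruning via ADMM, which this illustration does not reproduce, and the hyperparameters shown are placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Projected gradient descent attack used to generate adversarial
    examples inside the training loop."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def magnitude_masks(model, sparsity=0.8):
    """Per-layer magnitude pruning masks (a stand-in for the ADMM step):
    keep only the largest-magnitude weights in each conv layer."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() == 4:  # conv weight tensors
            k = int(sparsity * p.numel())
            threshold = p.abs().flatten().kthvalue(k).values
            masks[name] = (p.abs() > threshold).float()
    return masks

# Training-loop outline: generate x_adv with pgd_attack, take an optimizer
# step on F.cross_entropy(model(x_adv), y), then multiply each pruned
# weight tensor by its mask so the sparsity pattern is preserved.
```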

FastDepth: Fast Monocular Depth Estimation on Embedded Systems

8 Mar 2019 dwofk/fast-depth

In this paper, we address the problem of fast depth estimation on embedded systems.

MONOCULAR DEPTH ESTIMATION NETWORK PRUNING

227
08 Mar 2019

Neural Rejuvenation: Improving Deep Network Training by Enhancing Computational Resource Utilization

CVPR 2019 joe-siyuan-qiao/NeuralRejuvenation-CVPR19

By simply replacing standard optimizers with Neural Rejuvenation, we are able to improve the performance of neural networks by a large margin while using similar training effort and maintaining their original resource usage.

NETWORK PRUNING NEURAL ARCHITECTURE SEARCH

30
02 Dec 2018