
Network Pruning

43 papers with code · Methodology

Latest papers without code

A Feature-map Discriminant Perspective for Pruning Deep Neural Networks

28 May 2020

Network pruning has become the de facto tool to accelerate deep neural networks for mobile and edge applications.

NETWORK PRUNING · QUANTIZATION

Bayesian Neural Networks at Scale: A Performance Analysis and Pruning Study

23 May 2020

This analysis of training a BNN at scale outlines the limitations and benefits compared to a conventional neural network.

NETWORK PRUNING

Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers

ICLR 2020

We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds.

NETWORK PRUNING
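
The trainable-threshold idea above admits a compact sketch. The following is a minimal, illustrative PyTorch version of a masked layer with a per-neuron trainable pruning threshold; the class names and the plain straight-through gradient are assumptions here, and the paper's own gradient approximation for the step function and its sparsity regularizer on the thresholds are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryStep(torch.autograd.Function):
    """Hard 0/1 step used for masking, with a straight-through gradient."""
    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient through unchanged.
        return grad_output

class MaskedLinear(nn.Module):
    """Linear layer whose sub-threshold weights are masked out; the
    per-neuron thresholds are trained jointly with the weights."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.threshold = nn.Parameter(torch.zeros(out_features, 1))

    def forward(self, x):
        # Mask is 1 where |w| exceeds the neuron's threshold, 0 elsewhere.
        mask = BinaryStep.apply(self.weight.abs() - self.threshold)
        return F.linear(x, self.weight * mask, self.bias)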

Artificial Neural Network Pruning to Extract Knowledge

13 May 2020

Artificial Neural Networks (ANNs) are widely used for solving complex problems, from medical diagnostics to face recognition.

FACE RECOGNITION · NETWORK PRUNING

Compact Neural Representation Using Attentive Network Pruning

10 May 2020

Network parameter reduction methods have been introduced to systematically deal with the computational and memory complexity of deep networks.

NETWORK PRUNING

GPU Acceleration of Sparse Neural Networks

9 May 2020

Our results show that the activation of sparse neural networks lends itself very well to GPU acceleration and can help speed up machine learning strategies that generate such networks, as well as other processes with similar structure.

NETWORK PRUNING
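
As a rough illustration of how such sparse workloads map onto the GPU, here is a sketch using PyTorch's COO sparse tensors (an assumption; the paper may use different kernels, and sparse matmul typically only outpaces dense at high sparsity levels).

import torch

# A weight matrix pruned to high sparsity (crude magnitude pruning, for illustration).
dense_w = torch.randn(4096, 4096)
dense_w[dense_w.abs() < 1.5] = 0.0

device = "cuda" if torch.cuda.is_available() else "cpu"
sparse_w = dense_w.to_sparse().to(device)   # COO sparse layout
x = torch.randn(4096, 256, device=device)

# Sparse-by-dense matrix multiply; on CUDA this dispatches to sparse kernels.
y = torch.sparse.mm(sparse_w, x)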

Streamlining Tensor and Network Pruning in PyTorch

28 Apr 2020

To counteract the explosion in size of state-of-the-art machine learning models, which can be attributed to the empirical advantages of over-parametrization, and to meet the need to deploy fast, sustainable, and private models on resource-constrained devices, the community has focused on techniques such as pruning, quantization, and distillation as central strategies for model compression.

MODEL COMPRESSION · NETWORK PRUNING · QUANTIZATION
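
PyTorch's built-in pruning utilities live in torch.nn.utils.prune, in the spirit of this work; a brief usage sketch (the model itself is a made-up example):

import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Unstructured pruning: zero the 30% smallest-magnitude weights of the first layer.
prune.l1_unstructured(model[0], name="weight", amount=0.3)

# Structured pruning: zero half of the output rows of the last layer by L2 norm.
prune.ln_structured(model[2], name="weight", amount=0.5, n=2, dim=0)

# Pruning is stored as weight_orig plus a weight_mask buffer; bake it in with:
prune.remove(model[0], "weight")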

Composition of Saliency Metrics for Channel Pruning with a Myopic Oracle

3 Apr 2020

In most cases our method finds better selections than even the best individual pruning saliency.

NETWORK PRUNING
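
Reading the abstract, the pattern appears to be: several saliency metrics each nominate a pruning candidate, and a cheap "myopic" oracle picks among them. The sketch below illustrates that pattern only, not the paper's method; the metric choices, function names, and one-batch loss oracle are all assumptions.

import torch

def channel_saliencies(weight):
    # Two common per-output-channel heuristics; the paper composes
    # several such metrics rather than trusting any single one.
    flat = weight.detach().flatten(1)
    return {
        "l1": flat.abs().sum(dim=1),
        "l2": flat.pow(2).sum(dim=1).sqrt(),
    }

def myopic_select(conv, model, eval_loss):
    # Each saliency nominates its lowest-scoring channel; the oracle
    # (eval_loss: a quick loss estimate, e.g. on one held-out batch)
    # keeps whichever tentative removal hurts the loss least.
    candidates = {int(s.argmin()) for s in channel_saliencies(conv.weight).values()}
    best_ch, best_loss = None, float("inf")
    for ch in candidates:
        saved = conv.weight.data[ch].clone()
        conv.weight.data[ch].zero_()      # tentatively prune the channel
        loss = eval_loss(model)
        conv.weight.data[ch] = saved      # undo the tentative prune
        if loss < best_loss:
            best_ch, best_loss = ch, loss
    return best_ch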

How Not to Give a FLOP: Combining Regularization and Pruning for Efficient Inference

30 Mar 2020

The challenge of speeding up deep learning models during the deployment phase has been a large, expensive bottleneck in the modern tech industry.

NETWORK PRUNING

Data Parallelism in Training Sparse Neural Networks

25 Mar 2020

As a result, we find that data parallelism in training sparse neural networks is no worse than in training densely parameterized networks, despite the general difficulty of training sparse neural networks.

NETWORK PRUNING