Model Compression

27 papers with code · Methodology

State-of-the-art leaderboards

You can find evaluation results in the subtasks. You can also submit evaluation metrics for this task.

Latest papers with code

Einconv: Exploring Unexplored Tensor Decompositions for Convolutional Neural Networks

13 Aug 2019 · pfnet-research/einconv

This raises the simple question of how many decompositions are possible, and which of these is the best.

MODEL COMPRESSION · NEURAL ARCHITECTURE SEARCH

★ 11
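
As an illustration of the search space, here is a minimal sketch (not the einconv API) of one candidate decomposition: a CP-style factorization of a 3x3 convolution into pointwise, depthwise, and pointwise layers, with sizes and rank chosen arbitrarily.

    # One candidate decomposition: CP-style factorization of a 3x3 conv.
    # Sizes and rank are illustrative; this is not the einconv API.
    import torch
    import torch.nn as nn

    c_in, c_out, k, rank = 64, 128, 3, 32

    dense = nn.Conv2d(c_in, c_out, k, padding=1)
    cp = nn.Sequential(
        nn.Conv2d(c_in, rank, 1, bias=False),    # mix input channels
        nn.Conv2d(rank, rank, k, padding=1,
                  groups=rank, bias=False),      # per-channel spatial filter
        nn.Conv2d(rank, c_out, 1),               # mix output channels
    )

    def n_params(m):
        return sum(p.numel() for p in m.parameters())

    x = torch.randn(1, c_in, 32, 32)
    assert dense(x).shape == cp(x).shape
    print(n_params(dense), "->", n_params(cp))   # 73856 -> 6560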

Light Multi-segment Activation for Model Compression

16 Jul 2019 · LMA-NeurIPS19/LMA

Inspired by the nature of the expressive power of neural networks, we propose to use a multi-segment activation, which can significantly improve expressiveness at very little cost, in the compact student model.

MODEL COMPRESSION · QUANTIZATION

★ 1
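
A generic sketch of a multi-segment (piecewise-linear) activation with one learnable slope per segment follows; the paper's LMA formulation and training scheme differ in the details.

    import torch
    import torch.nn as nn

    class MultiSegmentActivation(nn.Module):
        """Piecewise-linear activation with one learnable slope per segment."""
        def __init__(self, boundaries=(-1.0, 0.0, 1.0)):
            super().__init__()
            self.b = tuple(boundaries)
            self.slopes = nn.Parameter(torch.ones(len(self.b) + 1))

        def forward(self, x):
            y = self.slopes[0] * x.clamp(max=self.b[0])
            for i in range(len(self.b) - 1):
                seg = x.clamp(self.b[i], self.b[i + 1]) - self.b[i]
                y = y + self.slopes[i + 1] * seg
            y = y + self.slopes[-1] * (x - self.b[-1]).clamp(min=0.0)
            return y

    act = MultiSegmentActivation()
    print(act(torch.linspace(-2.0, 2.0, 5)))  # identity until slopes are trained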

COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning

25 Jun 2019 · ZJULearning/COP

Cross-layer filter comparison is unachievable when importance is defined locally within each layer.

NEURAL NETWORK COMPRESSION

★ 24
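
The code below sketches the core idea of correlation-based, layer-normalized filter importance that can be compared across layers; the paper's exact importance definition and regularizers differ.

    import numpy as np

    def filter_importance(weight):
        """weight: (out_channels, fan_in) flattened conv filters."""
        w = weight / (np.linalg.norm(weight, axis=1, keepdims=True) + 1e-8)
        corr = np.abs(w @ w.T)            # cosine similarity between filters
        np.fill_diagonal(corr, 0.0)
        imp = 1.0 - corr.max(axis=1)      # highly correlated filters score low
        return imp / imp.sum()            # normalize -> comparable across layers

    rng = np.random.default_rng(0)
    layers = {"conv1": rng.standard_normal((32, 64 * 9)),
              "conv2": rng.standard_normal((64, 64 * 9))}
    scores = {n: filter_importance(w) for n, w in layers.items()}
    thresh = np.quantile(np.concatenate(list(scores.values())), 0.3)
    print({n: int((s > thresh).sum()) for n, s in scores.items()})  # filters kept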

Shakeout: A New Approach to Regularized Deep Neural Network Training

13 Apr 2019 · kgl-prml/shakeout-for-caffe

Dropout has played an essential role in many successful deep neural networks by inducing regularization during model training.

MODEL COMPRESSION

★ 1
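
For reference, standard inverted dropout looks like the sketch below; Shakeout replaces the keep-or-drop choice with randomly enhancing or reversing each unit's contribution (see the paper for the exact scheme).

    import numpy as np

    def inverted_dropout(x, p=0.5, rng=np.random.default_rng(0)):
        mask = rng.random(x.shape) >= p   # drop each unit with probability p
        return x * mask / (1.0 - p)       # rescale so the expectation matches x

    print(inverted_dropout(np.ones((2, 4))))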

Adversarial Robustness vs Model Compression, or Both?

29 Mar 2019 · yeshaokai/Robustness-Aware-Pruning-ADMM

Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting; training a small model from scratch, even with initialization inherited from the large model, cannot achieve both adversarial robustness and high standard accuracy.

MODEL COMPRESSION · NETWORK PRUNING

★ 16
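
ADMM-based weight pruning alternates loss minimization with a Euclidean projection onto the sparsity constraint; a sketch of that projection step (keep the k largest-magnitude weights) follows, with names of our own choosing.

    import numpy as np

    def project_to_sparse(w, k):
        """Euclidean projection onto {tensors with at most k nonzeros}."""
        if k >= w.size:
            return w
        thresh = np.partition(np.abs(w).ravel(), -k)[-k]  # k-th largest magnitude
        return np.where(np.abs(w) >= thresh, w, 0.0)

    w = np.random.default_rng(0).standard_normal((4, 4))
    print(project_to_sparse(w, k=5))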

The State of Sparsity in Deep Neural Networks

25 Feb 2019 · google-research/google-research

We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet.

MODEL COMPRESSION

★ 3,408
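
The magnitude-pruning baseline in this comparison builds on the gradual sparsity schedule of Zhu & Gupta (2017); a sketch of that cubic schedule follows, with illustrative step counts.

    def sparsity_at_step(step, final_sparsity=0.9, begin=0, end=10_000):
        """Cubic ramp from 0 to final_sparsity between begin and end steps."""
        t = min(max(step, begin), end)
        frac = (t - begin) / (end - begin)
        return final_sparsity * (1.0 - (1.0 - frac) ** 3)

    for s in (0, 2_500, 5_000, 10_000):
        print(s, round(sparsity_at_step(s), 4))   # 0.0, 0.5203, 0.7875, 0.9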

Information-Theoretic Understanding of Population Risk Improvement with Model Compression

27 Jan 2019 · wgao9/weight_quant

We show that model compression can improve the population risk of a pre-trained model, by studying the tradeoff between the decrease in the generalization error and the increase in the empirical risk with model compression.

MODEL COMPRESSION

★ 0
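
One concrete compression operation covered by this kind of analysis is uniform weight quantization; a minimal sketch (not the repo's code) follows.

    import numpy as np

    def quantize_uniform(w, n_bits=4):
        """Symmetric uniform quantization to n_bits, returned dequantized."""
        qmax = 2 ** (n_bits - 1) - 1
        scale = np.abs(w).max() / qmax
        return np.round(w / scale).clip(-qmax - 1, qmax) * scale

    w = np.random.default_rng(0).standard_normal(5)
    print(w)
    print(quantize_uniform(w))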

Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression

CVPR 2019 · yuchaoli/KSE

The relationship between input feature maps and 2D kernels is revealed in a theoretical framework; based on it, a kernel sparsity and entropy (KSE) indicator is proposed to quantify feature map importance in a feature-agnostic manner and guide model compression.

MODEL COMPRESSION

★ 26 · 11 Dec 2018
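
A much-simplified sketch of a sparsity-plus-entropy indicator per input feature map follows; the paper's KSE uses a density-based kernel entropy, so treat the entropy term here as a stand-in.

    import numpy as np

    def kse_like_indicator(weight, alpha=1.0):
        """weight: conv tensor (c_out, c_in, k, k); one score per input channel."""
        mags = np.abs(weight).sum(axis=(2, 3))           # |kernel|_1 per 2D kernel
        sparsity = mags.sum(axis=0)                      # kernel sparsity term
        p = mags / (mags.sum(axis=0, keepdims=True) + 1e-8)
        entropy = -(p * np.log(p + 1e-8)).sum(axis=0)    # stand-in entropy term
        return np.sqrt(sparsity / (1.0 + alpha * entropy))

    w = np.random.default_rng(0).standard_normal((64, 32, 3, 3))
    print(kse_like_indicator(w).shape)                   # (32,)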

Teacher-Student Compression with Generative Adversarial Networks

ICLR 2019 · RuishanLiu/GAN-MC

Our GAN-assisted TSC (GAN-TSC) significantly improves student accuracy for expensive models such as large random forests and deep neural networks on both tabular and image datasets.

IMAGE CLASSIFICATION · MODEL COMPRESSION

★ 6 · 05 Dec 2018
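
Schematically, the student is trained to match the teacher's outputs on generator samples; the sketch below uses stand-in modules and omits the GAN training itself.

    import torch
    import torch.nn as nn

    teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
    student = nn.Linear(16, 10)                              # much smaller model
    generator = nn.Sequential(nn.Linear(8, 16), nn.Tanh())   # pretrained in practice

    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    kl = nn.KLDivLoss(reduction="batchmean")

    for step in range(100):
        x = generator(torch.randn(32, 8)).detach()           # synthetic inputs
        with torch.no_grad():
            t_out = teacher(x).softmax(-1)                   # teacher soft labels
        loss = kl(student(x).log_softmax(-1), t_out)
        opt.zero_grad()
        loss.backward()
        opt.step()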

JavaScript Convolutional Neural Networks for Keyword Spotting in the Browser: An Experimental Analysis

30 Oct 2018 · castorini/honkling

Overall, our robust, cross-device implementation for keyword spotting realizes a new paradigm for serving neural network applications, and one of our slim models reduces latency by 66% with a minimal accuracy decrease of four percentage points (94% to 90%).

KEYWORD SPOTTING · MODEL COMPRESSION

★ 11