
Model Compression

21 papers with code · Methodology

State-of-the-art leaderboards

You can find evaluation results in the subtasks. You can also submit evaluation metrics for this task.

Latest papers with code

COP: Customized Deep Model Compression via Regularized Correlation-Based Filter-Level Pruning

25 Jun 2019 · ZJULearning/COP

Cross-layer filter comparison is unachievable since filter importance is defined locally within each layer (a generic cross-layer ranking is sketched below).

NEURAL NETWORK COMPRESSION

2 stars · 25 Jun 2019
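A minimal sketch of the cross-layer idea the snippet points at: normalize filter importance within each layer so filters from different layers can be ranked in one global pool. The L1-norm importance is a stand-in (COP's own score is correlation-based), and the function below is hypothetical:

```python
import torch
import torch.nn as nn

def global_filter_ranking(model, prune_ratio=0.3):
    """Rank conv filters across all layers by per-layer-normalized importance."""
    scores = []  # (layer_name, filter_index, normalized_importance)
    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d):
            # Stand-in importance: L1 norm of each output filter.
            imp = m.weight.detach().abs().sum(dim=(1, 2, 3))
            imp = imp / imp.sum()  # normalize so layers become comparable
            scores += [(name, i, v.item()) for i, v in enumerate(imp)]
    scores.sort(key=lambda t: t[2])  # least important first, from any layer
    return scores[:int(len(scores) * prune_ratio)]
```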

Shakeout: A New Approach to Regularized Deep Neural Network Training

13 Apr 2019 · kgl-prml/shakeout-for-caffe

Dropout has played an essential role in many successful deep neural networks by inducing regularization during model training (a minimal dropout sketch follows below).

MODEL COMPRESSION

1 star · 13 Apr 2019
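For reference, a minimal sketch of the inverted dropout that the snippet credits with regularizing training; Shakeout itself modifies this scheme, and its exact weight-level update is not reproduced here:

```python
import torch

def inverted_dropout(x, p=0.5, training=True):
    """Zero each unit with probability p; rescale survivors by 1/(1-p)
    so the expected activation matches test time (no-op when evaluating)."""
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) >= p).to(x.dtype)
    return x * mask / (1.0 - p)
```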

The State of Sparsity in Deep Neural Networks

25 Feb 2019 · google-research/google-research

We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: a Transformer trained on WMT 2014 English-to-German and a ResNet-50 trained on ImageNet (a magnitude-pruning sketch follows below).

MODEL COMPRESSION

2,123 stars · 25 Feb 2019
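A minimal sketch of magnitude pruning, one of the sparsification techniques the paper evaluates: pool all weight magnitudes, pick a single global threshold for the target sparsity, and zero everything below it (with fine-tuning afterwards in practice):

```python
import torch

def magnitude_prune(model, sparsity=0.9):
    # Pool all weight magnitudes to find one global threshold.
    flat = torch.cat([p.detach().abs().reshape(-1)
                      for p in model.parameters() if p.dim() > 1])
    k = max(1, int(flat.numel() * sparsity))
    threshold = torch.kthvalue(flat, k).values
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:
                # Keep only weights strictly above the threshold.
                p.mul_((p.abs() > threshold).to(p.dtype))
```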

Information-Theoretic Understanding of Population Risk Improvement with Model Compression

27 Jan 2019 · wgao9/weight_quant

We show that model compression can improve the population risk of a pre-trained model by studying the tradeoff between the decrease in generalization error and the increase in empirical risk under compression (a generic quantizer is sketched below).

MODEL COMPRESSION

0 stars · 27 Jan 2019
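As an illustration of the kind of weight compression the paper analyzes, a generic symmetric uniform quantizer is sketched below; the paper's own quantization scheme and its information-theoretic analysis are more involved, so treat this as an assumption-laden stand-in:

```python
import torch

def uniform_quantize(w, num_bits=4):
    # Symmetric signed grid with 2^(b-1) - 1 positive levels.
    levels = 2 ** (num_bits - 1) - 1
    scale = w.detach().abs().max().clamp_min(1e-12) / levels
    # Snap each weight to the nearest grid point.
    return (w / scale).round().clamp(-levels, levels) * scale
```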

Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression

CVPR 2019 · yuchaoli/KSE

A theoretical framework reveals the relationship between input feature maps and their 2D kernels; building on it, a kernel sparsity and entropy (KSE) indicator is proposed to quantify feature-map importance in a feature-agnostic manner and guide model compression (a simplified indicator is sketched below).

MODEL COMPRESSION

11 stars · 11 Dec 2018
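A simplified sketch of a KSE-style indicator. Kernel sparsity is taken as the L1 mass of the 2D kernels attached to each input feature map; the entropy term is a plain Shannon entropy over that mass, a stand-in for the paper's density-based kernel entropy, and the combining formula is assumed:

```python
import torch

def kse_indicator(conv_weight, alpha=1.0):
    # conv_weight: (out_channels, in_channels, kH, kW)
    norms = conv_weight.detach().abs().sum(dim=(2, 3))    # per-kernel L1 norms
    sparsity = norms.sum(dim=0)                           # mass per input map
    p = norms / norms.sum(dim=0, keepdim=True).clamp_min(1e-12)
    entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=0)  # stand-in entropy
    # Larger value => more important input feature map.
    return (sparsity / (1.0 + alpha * entropy)).sqrt()
```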

Teacher-Student Compression with Generative Adversarial Networks

ICLR 2019 · RuishanLiu/GAN-TSC

Our GAN-assisted teacher-student compression (GAN-TSC) significantly improves student accuracy for expensive models such as large random forests and deep neural networks on both tabular and image datasets (a distillation-step sketch follows below).

IMAGE CLASSIFICATION · MODEL COMPRESSION

4 stars · 05 Dec 2018
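A hedged sketch of one GAN-assisted distillation step: a pre-trained generator (the hypothetical `generator` below) synthesizes extra inputs, the teacher soft-labels them, and the student fits those labels. The paper's training specifics, and its non-neural teachers such as random forests, are not reproduced here:

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, generator, optimizer,
                 batch_size=64, z_dim=100):
    z = torch.randn(batch_size, z_dim)
    x = generator(z).detach()                  # synthetic transfer samples
    with torch.no_grad():
        targets = F.softmax(teacher(x), dim=1) # teacher's soft labels
    loss = F.kl_div(F.log_softmax(student(x), dim=1), targets,
                    reduction='batchmean')
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```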

JavaScript Convolutional Neural Networks for Keyword Spotting in the Browser: An Experimental Analysis

30 Oct 2018 · castorini/honkling

Overall, our robust, cross-device implementation for keyword spotting realizes a new paradigm for serving neural network applications, and one of our slim models reduces latency by 66% with only a minimal 4% drop in accuracy (from 94% to 90%).

KEYWORD SPOTTING · MODEL COMPRESSION

9 stars · 30 Oct 2018

Discrimination-aware Channel Pruning for Deep Neural Networks

NeurIPS 2018 · SCUT-AILab/DCP

Channel pruning is one of the predominant approaches to deep model compression (a baseline sketch follows below).

MODEL COMPRESSION

72 stars · 28 Oct 2018
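Since DCP's discrimination-aware losses are beyond a snippet, the sketch below shows only the generic channel-pruning baseline it builds on: score each output channel by the L1 norm of its filter and mark the weakest for removal:

```python
import torch
import torch.nn as nn

def channels_to_prune(conv: nn.Conv2d, ratio=0.5):
    # Importance of each output channel: L1 norm of its filter weights.
    importance = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    n_prune = int(conv.out_channels * ratio)
    return torch.argsort(importance)[:n_prune]  # weakest channels first
```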

Dynamic Channel Pruning: Feature Boosting and Suppression

ICLR 2019 · deep-fry/mayo

Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources (a dynamic gating sketch follows below).

MODEL COMPRESSION

36 stars · 12 Oct 2018
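A speculative sketch in the spirit of feature boosting and suppression: a tiny side network predicts per-channel saliency for each input, the top-k channels are boosted by their saliency, and the rest are suppressed. The layer shapes, the predictor, and the gating details are assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class FBSConv(nn.Module):
    def __init__(self, c_in, c_out, k=3, keep_ratio=0.5):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2)
        self.saliency = nn.Linear(c_in, c_out)  # tiny per-input predictor
        self.n_keep = max(1, int(c_out * keep_ratio))

    def forward(self, x):
        # Predict per-channel saliency from globally pooled input features.
        s = torch.relu(self.saliency(x.mean(dim=(2, 3))))  # (B, c_out)
        top = torch.topk(s, self.n_keep, dim=1)            # winners take all
        gate = torch.zeros_like(s).scatter(1, top.indices, top.values)
        # Boost the winning channels, suppress the rest; a real implementation
        # would skip computing the suppressed channels entirely.
        return self.conv(x) * gate.unsqueeze(-1).unsqueeze(-1)
```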

On-Device Neural Language Model Based Word Prediction

COLING 2018 · meinwerk/WordPrediction

Recent developments in deep learning applied to language modeling have led to success in text-processing tasks such as summarization and machine translation.

LANGUAGE MODELLING · MACHINE TRANSLATION · MODEL COMPRESSION · NETWORK PRUNING · SPEECH RECOGNITION

16 stars · 01 Aug 2018