Latest papers with code

FAT: Learning Low-Bitwidth Parametric Representation via Frequency-Aware Transformation

15 Feb 2021 · ChaofanTao/FAT_Quantization

Prior art often discretizes the network weights by carefully tuning quantization hyper-parameters (e.g., non-uniform step size and layer-wise bitwidths), which is complicated and sub-optimal because the full-precision and low-precision models have a large discrepancy.

NEURAL NETWORK COMPRESSION QUANTIZATION

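The step-size and bitwidth hyper-parameters mentioned above are easiest to see in plain uniform quantization. Below is a minimal PyTorch sketch of that generic baseline; it is not the frequency-aware transformation proposed in the paper, and the 4-bit setting is an arbitrary illustrative choice.

```python
import torch

def quantize_uniform(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Uniform symmetric quantization to a signed (2**bits)-level grid."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit signed values
    step = w.abs().max() / qmax           # the step-size hyper-parameter
    q = torch.clamp(torch.round(w / step), -qmax - 1, qmax)
    return q * step                       # dequantize back to float

w = torch.randn(64, 64)
w_q = quantize_uniform(w, bits=4)
print((w - w_q).abs().max())              # error is bounded by step / 2
```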

Efficient CNN-LSTM based Image Captioning using Neural Network Compression

17 Dec 2020 · amanmohanty/idl-nncompress

Modern neural networks achieve state-of-the-art performance on tasks across Computer Vision, Natural Language Processing, and related verticals.

IMAGE CAPTIONING NEURAL NETWORK COMPRESSION QUANTIZATION


Robustness and Transferability of Universal Attacks on Compressed Models

10 Dec 2020 · kenny-co/sgd-uap-torch

In this work, we analyze the effect of various compression techniques on universal adversarial perturbation (UAP) attacks, including different forms of pruning and quantization.

NEURAL NETWORK COMPRESSION QUANTIZATION

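For context, what makes an attack "universal" is that a single fixed perturbation is added to every input. The sketch below shows only that application step in generic PyTorch; it is not the SGD-UAP training loop from the linked repo, and eps = 10/255 is an illustrative budget.

```python
import torch

def apply_uap(images: torch.Tensor, uap: torch.Tensor, eps: float = 10 / 255):
    """Add one input-agnostic perturbation, clipped to an L-inf ball of
    radius eps, to a whole batch of images in [0, 1]."""
    delta = uap.clamp(-eps, eps)
    return (images + delta).clamp(0.0, 1.0)

batch = torch.rand(8, 3, 224, 224)   # dummy image batch
uap = torch.zeros(3, 224, 224)       # a trained UAP would go here
adv = apply_uap(batch, uap)          # broadcast over the whole batch
```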

Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression

5 Dec 2020 · codestar12/Parallel-Independent-Blockwise-Distillation

The experimental results, obtained on an AMD server with four GeForce RTX 2080 Ti GPUs, show that our algorithm achieves a 3x speedup plus 19% energy savings on VGG distillation, and a 3.5x speedup plus 29% energy savings on ResNet distillation, both with negligible accuracy loss.

KNOWLEDGE DISTILLATION NEURAL NETWORK COMPRESSION QUANTIZATION SPEECH RECOGNITION


torchdistill: A Modular, Configuration-Driven Framework for Knowledge Distillation

25 Nov 2020 · yoshitomo-matsubara/torchdistill

While knowledge distillation (transfer) has been attracting attention from the research community, recent developments in the field have heightened the need for reproducible studies and highly generalized frameworks that lower the barriers to high-quality, reproducible deep learning research.

IMAGE CLASSIFICATION INSTANCE SEGMENTATION KNOWLEDGE DISTILLATION NEURAL NETWORK COMPRESSION

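As background for the knowledge distillation entries on this page, here is a minimal sketch of the classic soft-target distillation loss. This is generic textbook KD (Hinton et al.), not torchdistill's configuration-driven API, and the temperature and mixing weight are illustrative defaults.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.5):
    """Blend KL divergence between temperature-softened teacher/student
    distributions with the ordinary hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # rescale gradients by T^2 (Hinton et al.)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```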

Head Network Distillation: Splitting Distilled Deep Neural Networks for Resource-Constrained Edge Computing Systems

20 Nov 2020 · yoshitomo-matsubara/head-network-distillation

In this paper, we propose to modify the structure and training process of DNN models for complex image classification tasks to achieve in-network compression in the early network layers.

EDGE-COMPUTING IMAGE CLASSIFICATION KNOWLEDGE DISTILLATION NEURAL NETWORK COMPRESSION

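The head/tail split in the title can be illustrated in a few lines. The sketch below cuts a stock torchvision ResNet-18 at an early layer; it is only a hypothetical stand-in for the paper's distilled head network.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Hypothetical split of a standard ResNet-18: the head would run on the
# edge device and send its (ideally compressed) activations to the tail.
model = resnet18(weights=None)
blocks = list(model.children())        # conv1, bn1, relu, maxpool, layer1..4, avgpool, fc
head = nn.Sequential(*blocks[:5])      # up to and including layer1
tail = nn.Sequential(*blocks[5:-1], nn.Flatten(), blocks[-1])

x = torch.randn(1, 3, 224, 224)
z = head(x)                            # intermediate tensor crossing the split
y = tail(z)
print(z.shape, y.shape)                # [1, 64, 56, 56] and [1, 1000]
```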

Additive Tree-Structured Conditional Parameter Spaces in Bayesian Optimization: A Novel Covariance Function and a Fast Implementation

6 Oct 2020 · maxc01/addtree

Bayesian optimization (BO) is a sample-efficient global optimization algorithm for black-box functions which are expensive to evaluate.

GLOBAL OPTIMIZATION NEURAL NETWORK COMPRESSION SMAC

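For readers new to BO, a minimal loop with an off-the-shelf library conveys the sample-efficiency idea. This sketch uses scikit-optimize's gp_minimize on a toy one-dimensional objective; it does not involve the tree-structured conditional spaces the paper addresses.

```python
from skopt import gp_minimize

# Toy "expensive" black-box objective: a quadratic on [-2, 2].
def objective(params):
    x = params[0]
    return (x - 0.3) ** 2

# BO fits a Gaussian-process surrogate to past evaluations and picks the
# next point via an acquisition function, so few evaluations are needed
# compared with grid or random search.
result = gp_minimize(objective, dimensions=[(-2.0, 2.0)], n_calls=15,
                     random_state=0)
print(result.x, result.fun)   # best input found and its objective value
```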

Neural Network Compression Using Higher-Order Statistics and Auxiliary Reconstruction Losses

15 Jun 2020 · chatzikon/DNN-COMPRESSION

In this paper, the problem of pruning and compressing the weights of various layers of deep neural networks is investigated.

NEURAL NETWORK COMPRESSION

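As a baseline for what pruning "the weights of various layers" means, the sketch below applies simple magnitude pruning with PyTorch's built-in utilities. The 30% sparsity level is arbitrary, and the paper's higher-order-statistics criterion is not implemented here.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(512, 256)

# L1 (magnitude) pruning: zero out the 30% of weights with the smallest
# absolute value; PyTorch stores a mask and reparametrizes the layer.
prune.l1_unstructured(layer, name="weight", amount=0.3)
print(float((layer.weight == 0).float().mean()))   # ~0.3 sparsity

# Bake the mask into the weight tensor to make the pruning permanent.
prune.remove(layer, "weight")
```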

WoodFisher: Efficient Second-Order Approximation for Neural Network Compression

NeurIPS 2020 · IST-DASLab/WoodFisher

Second-order information, in the form of Hessian- or Inverse-Hessian-vector products, is a fundamental tool for solving optimization problems.

IMAGE CLASSIFICATION NEURAL NETWORK COMPRESSION

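The Hessian-vector products mentioned in the abstract can be computed without ever materializing the Hessian, via double backpropagation. Below is a generic autograd sketch on a toy objective; it is not the WoodFisher inverse-Hessian approximation itself.

```python
import torch

# Toy objective over a 3-parameter vector.
theta = torch.randn(3, requires_grad=True)
loss = (theta ** 2).sum() + theta.prod()

# First backward pass with create_graph=True keeps the graph, so we can
# differentiate through the gradient itself.
(grad,) = torch.autograd.grad(loss, theta, create_graph=True)

v = torch.randn(3)                             # arbitrary direction
(hvp,) = torch.autograd.grad(grad @ v, theta)  # H @ v without forming H
print(hvp)
```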

Neural network compression via learnable wavelet transforms

20 Apr 2020 · v0lta/wavelet-network-compression

Linear layers still occupy a significant portion of the parameters in recurrent neural networks (RNNs).

NEURAL NETWORK COMPRESSION

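The claim that the dense weight matrices dominate RNN parameter counts is easy to check numerically. The sketch below counts the weight matrices of a standard PyTorch LSTM; the layer sizes are illustrative.

```python
import torch.nn as nn

# A single-layer LSTM: the input and recurrent weight matrices
# (4h x i and 4h x h) hold nearly all of the parameters.
lstm = nn.LSTM(input_size=256, hidden_size=512)

matrix_params = sum(p.numel() for n, p in lstm.named_parameters()
                    if "weight" in n)
total_params = sum(p.numel() for p in lstm.parameters())
print(matrix_params, total_params, matrix_params / total_params)
# 1572864 1576960 ~0.997 -> the dense matrices dominate
```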