Latest papers without code

Data-Free Knowledge Distillation with Soft Targeted Transfer Set Synthesis

10 Apr 2021

Knowledge distillation (KD) has proved to be an effective approach for deep neural network compression, which learns a compact network (student) by transferring the knowledge from a pre-trained, over-parameterized network (teacher).

KNOWLEDGE DISTILLATION · NEURAL NETWORK COMPRESSION
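
For context, the generic soft-target distillation objective that this line of work builds on can be sketched as below. This is the standard Hinton-style loss, not the paper's data-free transfer-set synthesis; the temperature and weighting values are illustrative placeholders.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term: match the teacher's temperature-softened output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term on the ground-truth targets.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In the data-free setting, the `teacher_logits` would come from a synthesized transfer set rather than real training data.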

Spectral Tensor Train Parameterization of Deep Learning Layers

7 Mar 2021

We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.

IMAGE CLASSIFICATION · IMAGE GENERATION · NEURAL NETWORK COMPRESSION
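
As a rough illustration of low-rank weight parameterization (the simplest two-factor special case; the paper's tensor-train format with embedded spectral properties is more elaborate), a linear layer can be written so its weight matrix is never materialized:

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """W is parameterized as U @ V with a small rank r instead of being stored densely."""
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) * rank ** -0.5)
        self.V = nn.Parameter(torch.randn(rank, in_features) * in_features ** -0.5)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # (x @ V^T) @ U^T costs O(r * (in + out)) per sample instead of O(in * out).
        return (x @ self.V.t()) @ self.U.t() + self.bias
```

The parameter count drops from in_features * out_features to rank * (in_features + out_features), which is the basic compression effect such parameterizations exploit.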

Neural Network Compression for Noisy Storage Devices

15 Feb 2021

We propose a radically different approach that: (i) employs analog memories to maximize the capacity of each memory cell, and (ii) jointly optimizes model compression and physical storage to maximize memory utility.

NEURAL NETWORK COMPRESSION

Scaling Up Exact Neural Network Compression by ReLU Stability

15 Feb 2021

We can compress a neural network while exactly preserving its underlying functionality with respect to a given input domain if some of its neurons are stable.

NEURAL NETWORK COMPRESSION
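
The notion of a "stable" neuron can be made concrete with interval bounds over a box input domain. The sketch below only flags stable units for a single layer and does not reproduce the paper's exact compression procedure; the function and argument names are illustrative.

```python
import numpy as np

def relu_stability(W, b, x_lo, x_hi):
    """Interval bounds on the pre-activations W @ x + b for x in [x_lo, x_hi].

    A unit whose lower bound is >= 0 is stably active (the ReLU acts as the
    identity and can be folded into the next layer); one whose upper bound
    is <= 0 is stably inactive (it always outputs zero and can be removed).
    Both removals preserve the network's function exactly on the domain.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lower = W_pos @ x_lo + W_neg @ x_hi + b
    upper = W_pos @ x_hi + W_neg @ x_lo + b
    return lower >= 0.0, upper <= 0.0
```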

Sparse matrix products for neural network compression

1 Jan 2021

Over-parameterization of neural networks is a well-known issue that accompanies their strong performance.

NEURAL NETWORK COMPRESSION

Multi-head Knowledge Distillation for Model Compression

5 Dec 2020

We add loss terms for training the student that measure the dissimilarity between student and teacher outputs of the auxiliary classifiers.

IMAGE CLASSIFICATION · KNOWLEDGE DISTILLATION · NEURAL NETWORK COMPRESSION
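
A minimal sketch of such a multi-head objective is given below, assuming matched auxiliary classifier heads have already been attached to both networks. The dissimilarity measure (mean squared error between logits) and the weighting beta are placeholders, not necessarily the paper's choices.

```python
import torch.nn.functional as F

def multi_head_kd_loss(student_heads, teacher_heads, labels, beta=1.0):
    """student_heads / teacher_heads: lists of logits from the auxiliary
    classifiers, with the final classifier's logits as the last entry."""
    # One distillation term per auxiliary head; the teacher is treated as fixed.
    aux = sum(F.mse_loss(s, t.detach()) for s, t in zip(student_heads, teacher_heads))
    # Standard supervised loss on the student's final output.
    hard = F.cross_entropy(student_heads[-1], labels)
    return hard + beta * aux
```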

Neural Network Compression Via Sparse Optimization

10 Nov 2020

Compressing deep neural networks (DNNs) to reduce inference cost is becoming increasingly important for meeting the realistic deployment requirements of various applications.

NEURAL NETWORK COMPRESSION · STOCHASTIC OPTIMIZATION
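
One common way to cast compression as a sparse optimization problem is to apply an L1 proximal (soft-thresholding) step after each gradient update, which drives many weights exactly to zero. This generic sketch only illustrates the idea and is not the paper's specific formulation.

```python
import torch

@torch.no_grad()
def l1_prox_step(model, lam, lr):
    """Proximal operator of lr * lam * ||w||_1, applied after optimizer.step().
    Entries with |w| <= lr * lam become exactly zero; the rest shrink toward zero."""
    for p in model.parameters():
        if p.dim() > 1:  # leave biases and norm parameters dense
            p.copy_(p.sign() * (p.abs() - lr * lam).clamp(min=0.0))
```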

Dirichlet Pruning for Neural Network Compression

10 Nov 2020

We introduce Dirichlet pruning, a novel post-processing technique to transform a large neural network model into a compressed one.

NEURAL NETWORK COMPRESSION · VARIATIONAL INFERENCE
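
The post-processing step ultimately reduces to ranking channels by an importance score and keeping the top ones. In the paper the scores come from a learned Dirichlet-parameterized gate; the sketch below simply takes them as input, and all names are illustrative.

```python
import torch

def prune_output_channels(conv_weight, importance, keep_ratio=0.5):
    """conv_weight: [out_ch, in_ch, kH, kW]; importance: [out_ch] scores.
    Returns the pruned weight and the indices of the kept channels, so that
    the next layer's input channels can be sliced consistently."""
    out_ch = conv_weight.shape[0]
    k = max(1, int(keep_ratio * out_ch))
    keep = torch.topk(importance, k).indices.sort().values
    return conv_weight[keep], keep
```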

A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions

5 Oct 2020

In this paper, we present a comprehensive review of the existing literature on compressing DNN models to reduce both storage and computation requirements.

KNOWLEDGE DISTILLATION · NETWORK PRUNING · NEURAL NETWORK COMPRESSION