DeepSZ: A Novel Framework to Compress Deep Neural Networks by Using Error-Bounded Lossy Compression

26 Jan 2019 · Sian Jin, Sheng Di, Xin Liang, Jiannan Tian, Dingwen Tao, Franck Cappello

DNNs have been rapidly and broadly adopted to improve data-analysis quality in many complex science and engineering applications. Today's DNNs are becoming deeper and wider because of growing demands on analysis quality and increasingly complex applications to resolve...
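The abstract centers on compressing network weights with error-bounded lossy compression. As a minimal sketch of the core idea (not the DeepSZ implementation itself, which builds on the SZ compressor), the snippet below quantizes each weight onto a uniform grid so that every reconstructed value stays within a user-specified absolute error bound; the function names and the bound value are illustrative assumptions.

```python
import numpy as np

def compress_error_bounded(weights, error_bound=1e-2):
    """Quantize weights so each reconstructed value differs from the
    original by at most `error_bound` (absolute error).

    Illustrative sketch only, not the actual DeepSZ/SZ pipeline."""
    # Grid spacing of 2*error_bound guarantees |w - w_hat| <= error_bound.
    return np.round(weights / (2 * error_bound)).astype(np.int32)

def decompress(codes, error_bound=1e-2):
    """Map integer codes back to approximate weight values."""
    return codes * (2 * error_bound)

# Usage: round-trip random weights and check the error bound holds.
w = np.random.randn(1000).astype(np.float32)
codes = compress_error_bounded(w, error_bound=1e-2)
w_hat = decompress(codes, error_bound=1e-2)
max_err = float(np.max(np.abs(w - w_hat)))
```

The integer codes are far more compressible (e.g., by entropy coding) than raw floats, which is what makes error-bounded quantization attractive for weight compression.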


Results from the Paper


Ranked #1 on Neural Network Compression on ImageNet (using extra training data).

TASK                       | DATASET  | MODEL        | METRIC VALUE | GLOBAL RANK | USES EXTRA TRAINING DATA
Neural Network Compression | ImageNet | ImageNet All | 0.1          | #1          | yes

Methods used in the Paper


METHOD                       | TYPE
1x1 Convolution              | Convolutions
Convolution                  | Convolutions
Local Response Normalization | Normalization
Grouped Convolution          | Convolutions
ReLU                         | Activation Functions
Dropout                      | Regularization
Dense Connections            | Feedforward Networks
Max Pooling                  | Pooling Operations
Softmax                      | Output Functions
AlexNet                      | Convolutional Neural Networks