Quantisation and Pruning for Neural Network Compression and Regularisation
Deep neural networks are typically too computationally expensive to run in real time on consumer-grade hardware and low-powered devices. In this paper, we investigate reducing the computational and memory requirements of neural networks through network pruning and quantisation. We examine their efficacy on large networks like AlexNet compared to recent compact architectures: ShuffleNet and MobileNet. Our results show that pruning and quantisation compress these networks to less than half their original size and improve their efficiency, most notably on MobileNet with a 7x speedup. We also demonstrate that, in addition to reducing the number of parameters in a network, pruning can help correct overfitting.
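To make the two techniques concrete, here is a minimal sketch of magnitude-based weight pruning followed by post-training dynamic quantisation, using PyTorch's built-in utilities. The toy model, the 50% sparsity level, and the int8 target are illustrative assumptions for the example, not the paper's exact pipeline.

```python
# Sketch: magnitude pruning + post-training dynamic quantisation in PyTorch.
# The model below is a hypothetical stand-in for AlexNet/MobileNet/ShuffleNet.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(3 * 32 * 32, 256),  # flattened 32x32 RGB input (e.g. CIFAR-10)
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Prune: zero out the 50% smallest-magnitude weights in each linear layer
# (the sparsity level is an assumption chosen for illustration).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeroed weights in permanently

# Quantise: convert the remaining float32 weights to int8 for inference.
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 3 * 32 * 32)
print(quantised(x).shape)  # torch.Size([1, 10])
```

Pruning shrinks the effective parameter count (and, as the abstract notes, can act as a regulariser), while quantisation reduces storage and speeds up inference by operating on lower-precision weights.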
Results from the Paper
Ranked #1 on Network Pruning on CIFAR-10 (Inference Time (ms) metric)
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Network Pruning | CIFAR-10 | MobileNet – Quantised | Inference Time (ms) | 4.74 | #1 |
| Neural Network Compression | CIFAR-10 | ShuffleNet – Quantised | Size (MB) | 1.9 | #1 |
| Network Pruning | CIFAR-10 | AlexNet – Quantised | Inference Time (ms) | 5.23 | #2 |
| Network Pruning | CIFAR-10 | ShuffleNet – Quantised | Inference Time (ms) | 23.15 | #3 |
| Neural Network Compression | CIFAR-10 | AlexNet – Quantised | Size (MB) | 54.6 | #3 |
| Neural Network Compression | CIFAR-10 | MobileNet – Quantised | Size (MB) | 2.9 | #2 |