Model Pruning Based on Quantified Similarity of Feature Maps

13 May 2021 · Zidu Wang, Xuexin Liu, Long Huang, Yunqing Chen, Yufei Zhang, Zhikang Lin, Rui Wang

Convolutional Neural Networks (CNNs) have been applied in numerous Internet of Things (IoT) devices for a wide range of downstream tasks. However, with the increasing amount of data on edge devices, CNNs can hardly complete some tasks in time with limited computing and storage resources. Recently, filter pruning has been regarded as an effective technique to compress and accelerate CNNs, but existing methods rarely prune CNNs from the perspective of compressing high-dimensional tensors. In this paper, we propose a novel theory to find redundant information in three-dimensional tensors, namely Quantified Similarity between Feature Maps (QSFM), and utilize this theory to guide the filter pruning procedure. We evaluate QSFM on several datasets (CIFAR-10, CIFAR-100 and ILSVRC-12) and on edge devices, demonstrating that the proposed method can effectively find redundant information in neural networks with comparable compression ratios and a tolerable drop in accuracy. Without any fine-tuning, QSFM compresses ResNet-56 on CIFAR-10 significantly (reducing FLOPs by 48.7% and parameters by 57.9%) with only a 0.54% loss in top-1 accuracy. For practical deployment on edge devices, QSFM accelerates MobileNet-V2 inference by 1.53 times with only a 1.23% loss in ILSVRC-12 top-1 accuracy.
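
The abstract names the QSFM idea (quantifying similarity between feature maps to locate redundant filters) without specifying the metric. Below is a minimal, illustrative sketch of similarity-guided filter selection, assuming a plain Euclidean distance between flattened feature maps; the function name `redundant_filter_indices`, the `prune_ratio` parameter, and the distance choice are assumptions for illustration, not the paper's definitions.

```python
# Illustrative sketch only: the paper's exact QSFM similarity measure is not
# given in the abstract, so Euclidean distance is used here as a stand-in.
import torch

def redundant_filter_indices(feature_maps: torch.Tensor, prune_ratio: float):
    """feature_maps: (C, H, W) activations of one conv layer on a calibration
    batch (e.g. averaged over the batch). Returns indices of filters whose
    feature maps are most similar to another map, i.e. pruning candidates."""
    C = feature_maps.shape[0]
    flat = feature_maps.reshape(C, -1)            # one vector per channel
    # Pairwise distances between channel feature maps (smaller = more similar).
    dists = torch.cdist(flat, flat, p=2)
    dists.fill_diagonal_(float("inf"))            # ignore self-similarity
    # Channels with a very close neighbour carry little extra information.
    nearest = dists.min(dim=1).values
    n_prune = int(prune_ratio * C)
    return torch.argsort(nearest)[:n_prune].tolist()

# Usage: collect feature maps from a layer on a calibration batch, drop the
# returned filter indices from that layer, and optionally fine-tune.
```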
