Compressing 3DCNNs Based on Tensor Train Decomposition

Three-dimensional convolutional neural networks (3DCNNs) have been applied to many tasks, e.g., video and 3D point cloud recognition. However, due to the higher dimension of the convolutional kernels, the space complexity of 3DCNNs is generally larger than that of traditional two-dimensional convolutional neural networks (2DCNNs). To miniaturize 3DCNNs for deployment in constrained environments such as embedded devices, neural network compression is a promising approach. In this work, we adopt tensor train (TT) decomposition, a straightforward and simple in-situ training compression method, to shrink 3DCNN models. By tensorizing 3D convolutional kernels in TT format, we investigate how to select appropriate TT ranks to achieve a higher compression ratio. We also discuss the redundancy of 3D convolutional kernels available for compression, the core significance and future directions of this work, and the theoretical computational complexity versus the practical execution time of convolution in TT format. Based on comparative experiments on the VIVA challenge, UCF11, and UCF101 datasets, we conclude that TT decomposition can compress 3DCNNs by around one hundred times without significant accuracy loss, which will enable their application in a wide range of real-world scenarios.
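The abstract's core idea is representing a high-dimensional convolutional kernel as a chain of small TT cores, where the chosen TT ranks trade reconstruction accuracy against parameter count. The sketch below is not the paper's code (the paper trains networks directly in TT format); it is a minimal, illustrative TT-SVD decomposition of a toy 3D kernel in plain NumPy, where the kernel shape `(D, H, W, C_in, C_out)`, the `tt_svd` helper, and the rank choices are all assumptions made for demonstration.

```python
# Illustrative sketch only: TT-SVD on a toy 3D convolutional kernel,
# assuming kernel shape (D, H, W, C_in, C_out). Function name, shapes,
# and ranks are hypothetical, not from the paper.
import numpy as np

def tt_svd(tensor, max_ranks):
    """Decompose `tensor` into TT cores via sequential truncated SVDs.

    `max_ranks` has length tensor.ndim - 1; core k has shape
    (r_{k-1}, n_k, r_k) with boundary ranks r_0 = r_d = 1.
    """
    shape = tensor.shape
    d = tensor.ndim
    cores = []
    r_prev = 1
    mat = tensor.reshape(r_prev * shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_ranks[k], len(s))  # truncate to the chosen TT rank
        cores.append(u[:, :r].reshape(r_prev, shape[k], r))
        # Carry the remainder forward and fold in the next mode.
        mat = (np.diag(s[:r]) @ vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

# Toy 3x3x3 kernel with 8 input and 16 output channels.
kernel = np.random.randn(3, 3, 3, 8, 16)
cores = tt_svd(kernel, max_ranks=[3, 3, 3, 8])

full = int(np.prod(kernel.shape))
compressed = sum(c.size for c in cores)
print(f"parameters: {full} -> {compressed} "
      f"(compression ratio ~{full / compressed:.1f}x)")
```

Counting the parameters of the TT cores against the dense kernel makes the rank/compression trade-off concrete: smaller intermediate ranks shrink the cores but discard more singular values, which is the selection problem the abstract refers to.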
