LIT: Learned Intermediate Representation Training for Model Compression

4 Sep 2019 · Animesh Koratana, Daniel Kang, Peter Bailis, Matei Zaharia

Researchers have proposed a range of model compression techniques to reduce the computational and memory footprint of deep neural networks (DNNs). In this work, we introduce Learned Intermediate Representation Training (LIT), a novel model compression technique that outperforms a range of recent model compression techniques by leveraging the highly repetitive structure of modern DNNs (e.g., ResNet). LIT uses a teacher DNN to train a student DNN of reduced depth by leveraging two key ideas: 1) LIT directly compares intermediate representations of the teacher and student models, and 2) LIT uses the intermediate representation from the teacher model’s previous block as input to the current student block during training, improving the stability of intermediate representations in the student network. We show that LIT can substantially reduce network size without loss in accuracy on a range of DNN architectures and datasets. For example, LIT can compress ResNet on CIFAR10 by 3.4×, outperforming network slimming and FitNets. Furthermore, measured by depth, LIT can compress ResNeXt by 5.5× on CIFAR10 (image classification), VDCNN by 1.7× on Amazon Reviews (sentiment analysis), and StarGAN by 1.8× on CelebA (style transfer, i.e., GANs).
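To make the two key ideas concrete, here is a minimal, hypothetical PyTorch sketch of the intermediate-representation term described in the abstract. It assumes `teacher_blocks` and `student_blocks` are aligned lists of modules (e.g., ResNet stages) whose outputs have matching shapes; the function name `lit_ir_loss` and all variable names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def lit_ir_loss(teacher_blocks, student_blocks, x):
    """Sum of MSE losses between corresponding teacher/student block outputs.

    Each student block is fed the teacher's *previous* intermediate
    representation, reflecting the second key idea of LIT.
    """
    # Run the teacher once (frozen) to collect its intermediate representations.
    with torch.no_grad():
        teacher_irs = []
        h = x
        for t_block in teacher_blocks:
            h = t_block(h)
            teacher_irs.append(h)

    ir_loss = x.new_zeros(())
    prev_ir = x  # the first student block sees the raw input
    for s_block, t_ir in zip(student_blocks, teacher_irs):
        s_out = s_block(prev_ir)               # student block driven by the teacher's previous IR
        ir_loss = ir_loss + F.mse_loss(s_out, t_ir)  # compare intermediate representations directly
        prev_ir = t_ir                         # next input comes from the teacher, not the student
    return ir_loss
```

In practice this term would presumably be combined with the usual task loss (and possibly a standard distillation loss) when training the compressed student.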
