Learning Intrinsic Sparse Structures within Long Short-Term Memory

ICLR 2018 · Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, Hai Li

Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) both in user devices with limited resources and in business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) networks by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states, and outputs...
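The key idea behind the paper's Intrinsic Sparse Structure (ISS) is that removing one hidden dimension of an LSTM requires zeroing a whole group of weights at once: the corresponding rows in all four gate blocks of both the input-to-hidden and hidden-to-hidden matrices, plus the corresponding column of the hidden-to-hidden matrix. A minimal NumPy sketch of a group-Lasso penalty over such groups is shown below; the function name `iss_group_lasso` and the `(4*hidden, ...)` weight layout are illustrative assumptions, not the paper's code.

```python
import numpy as np

def iss_group_lasso(W_x, W_h, hidden_size, lam=1e-3):
    """Group-Lasso penalty over ISS-style groups (illustrative sketch).

    W_x: input-to-hidden weights, shape (4*hidden_size, input_size),
         with the four gate blocks stacked along the first axis.
    W_h: hidden-to-hidden weights, shape (4*hidden_size, hidden_size).
    Group k collects every weight tied to hidden unit k: its row in each
    of the four gate blocks of W_x and W_h, plus column k of W_h.
    """
    penalty = 0.0
    for k in range(hidden_size):
        parts = [W_x[g * hidden_size + k, :] for g in range(4)]
        parts += [W_h[g * hidden_size + k, :] for g in range(4)]
        parts.append(W_h[:, k])
        group = np.concatenate(parts)
        # Sum of L2 norms drives entire groups toward exactly zero.
        penalty += np.linalg.norm(group)
    return lam * penalty
```

Adding this penalty to the training loss encourages whole hidden dimensions to vanish together, so the surviving network is a genuinely smaller dense LSTM rather than an unstructured sparse one.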

