Simple Recurrent Units for Highly Parallelizable Recurrence

EMNLP 2018 · Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi

Common recurrent neural architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence, enable a highly parallelized implementation, and come with careful initialization to facilitate the training of deep models. We demonstrate the effectiveness of SRU on multiple NLP tasks. SRU achieves a 5–9x speed-up over cuDNN-optimized LSTM on classification and question answering datasets, and delivers stronger results than LSTM and convolutional models. We also obtain an average 0.7 BLEU improvement over the Transformer model on translation by incorporating SRU into the architecture.
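The abstract does not spell out the recurrence equations, but a minimal NumPy sketch of a light recurrent cell in the spirit of SRU illustrates the parallelization argument: the expensive matrix multiplications depend only on the input and can be batched over all time steps, leaving only cheap elementwise operations in the sequential loop. The function name `sru_layer` and the parameters `W`, `Wf`, `bf`, `Wr`, `br` are illustrative placeholders, not the paper's exact parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_layer(x, W, Wf, bf, Wr, br):
    """Minimal single-layer SRU-style forward pass (no batching).

    x: (T, d) input sequence; W, Wf, Wr: (d, d) weights; bf, br: (d,) biases.
    Assumes input and hidden dimensions are equal so the highway
    connection can add x[t] directly. This is a sketch of one common
    light-recurrence formulation, not the paper's exact definition.
    """
    T, d = x.shape

    # Parallelizable part: projections for every time step at once.
    U  = x @ W          # candidate values, (T, d)
    Uf = x @ Wf + bf    # forget-gate pre-activations, (T, d)
    Ur = x @ Wr + br    # reset-gate pre-activations, (T, d)

    c = np.zeros(d)
    h = np.zeros((T, d))
    for t in range(T):  # sequential part: elementwise ops only
        f = sigmoid(Uf[t])
        r = sigmoid(Ur[t])
        c = f * c + (1.0 - f) * U[t]        # internal state update
        h[t] = r * c + (1.0 - r) * x[t]     # highway (skip) connection
    return h, c

# Tiny usage example with random weights.
T, d = 5, 4
rng = np.random.default_rng(0)
x = rng.standard_normal((T, d))
W, Wf, Wr = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
h, c = sru_layer(x, W, Wf, np.zeros(d), Wr, np.zeros(d))
```

The design point this sketch tries to capture is that the recurrent state enters only through elementwise gates, so the heavy matrix products are independent of it and can be fused and parallelized on a GPU, unlike the state-dependent matrix products inside an LSTM.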


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Question Answering | SQuAD1.1 | SRU | EM | 71.4 | # 150 |
| Question Answering | SQuAD1.1 | SRU | F1 | 80.2 | # 153 |
| Question Answering | SQuAD1.1 | SRU | Hardware Burden | 4G | # 1 |
| Question Answering | SQuAD1.1 | SRU | Operations per network pass | None | # 1 |
| Question Answering | SQuAD1.1 dev | SRU | EM | 71.4 | # 32 |
| Question Answering | SQuAD1.1 dev | SRU | F1 | 80.2 | # 35 |
| Machine Translation | WMT2014 English-German | Transformer + SRU | BLEU score | 28.4 | # 44 |
| Machine Translation | WMT2014 English-German | Transformer + SRU | Hardware Burden | 34G | # 1 |
| Machine Translation | WMT2014 English-German | Transformer + SRU | Operations per network pass | None | # 1 |

Methods