Simple Recurrent Units for Highly Parallelizable Recurrence

EMNLP 2018 • Tao Lei • Yu Zhang • Sida I. Wang • Hui Dai • Yoav Artzi

Common recurrent neural architectures scale poorly due to the intrinsic difficulty of parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence and enable a highly parallelized implementation, and it comes with careful initialization to facilitate the training of deep models.
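To make the parallelism concrete, below is a minimal NumPy sketch of the SRU recurrence (the forget-gate/reset-gate formulation from the paper): the matrix multiplications depend only on the inputs, so they can be computed for all timesteps at once, and the per-timestep recurrence that remains is purely elementwise. The names (sru_layer, W, Wf, Wr, vf, vr, bf, br) are illustrative rather than the authors' released implementation, and the sketch assumes the hidden size equals the input size so the highway connection applies directly.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_layer(x, W, Wf, Wr, vf, vr, bf, br):
    """Run one SRU layer over a sequence.

    x: (seq_len, d) input sequence.
    W, Wf, Wr: (d, d) weight matrices; vf, vr, bf, br: (d,) vectors.
    Returns hidden states h of shape (seq_len, d).
    """
    # All matrix multiplications depend only on the input, so they are
    # batched over every timestep up front; this is the expensive part
    # and it parallelizes trivially.
    U = x @ W      # candidate values for all timesteps
    Uf = x @ Wf    # forget-gate pre-activations
    Ur = x @ Wr    # reset-gate pre-activations

    c = np.zeros(x.shape[1])   # internal cell state c_0
    h = np.zeros_like(x)
    # The remaining recurrence is elementwise in the cell state c.
    for t in range(x.shape[0]):
        f = sigmoid(Uf[t] + vf * c + bf)   # forget gate, from c_{t-1}
        r = sigmoid(Ur[t] + vr * c + br)   # reset gate, from c_{t-1}
        c = f * c + (1.0 - f) * U[t]       # cell state update c_t
        h[t] = r * c + (1.0 - r) * x[t]    # highway-style output h_t
    return h

# Usage example: a random length-5 sequence with hidden size 4.
rng = np.random.default_rng(0)
d, T = 4, 5
x = rng.standard_normal((T, d))
mats = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
vecs = [rng.standard_normal(d) * 0.1 for _ in range(4)]
print(sru_layer(x, *mats, *vecs).shape)  # (5, 4)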


Evaluation


Task                  Dataset                  Model              Metric       Value   Global rank
Question Answering    SQuAD1.1                 SRU                EM           71.4    #112
Question Answering    SQuAD1.1                 SRU                F1           80.2    #112
Machine Translation   WMT2014 English-German   Transformer + SRU  BLEU score   28.4    #13