Simple Recurrent Units for Highly Parallelizable Recurrence

EMNLP 2018 · Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi

Common recurrent neural architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability...
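The core design point behind SRU's parallelizability is that the expensive matrix multiplications depend only on the input at each step, not on the previous hidden state, so they can be batched across all timesteps at once; the per-step recurrence that remains is purely elementwise. Below is a minimal, unoptimized PyTorch sketch of this structure, assuming the SRU formulation with a forget gate, a reset gate, and a highway connection; the parameter names are illustrative, and a production implementation would fuse the elementwise loop into a single kernel for speed:

```python
import torch

def sru_layer(x, W, W_f, W_r, v_f, v_r, b_f, b_r):
    """Illustrative SRU layer (not the authors' optimized code).

    x: (seq_len, batch, d); W, W_f, W_r: (d, d); v_f, v_r, b_f, b_r: (d,).
    Input and hidden sizes are kept equal so the highway connection applies.
    """
    seq_len = x.shape[0]
    # Parallelizable part: every matrix multiplication is batched over
    # all timesteps up front, outside the sequential loop.
    u  = x @ W    # candidate-state pre-activations, (seq_len, batch, d)
    uf = x @ W_f  # forget-gate pre-activations
    ur = x @ W_r  # reset-gate pre-activations

    c = torch.zeros_like(x[0])  # initial internal state c_0, (batch, d)
    hs = []
    for t in range(seq_len):
        # Sequential part: only cheap elementwise operations remain here.
        f = torch.sigmoid(uf[t] + v_f * c + b_f)  # forget gate
        r = torch.sigmoid(ur[t] + v_r * c + b_r)  # reset gate
        c = f * c + (1.0 - f) * u[t]              # internal state update
        h = r * c + (1.0 - r) * x[t]              # highway (skip) output
        hs.append(h)
    return torch.stack(hs), c


# Toy usage with hypothetical shapes: seq_len=5, batch=2, hidden size d=8.
d = 8
x = torch.randn(5, 2, d)
params = dict(
    W=torch.randn(d, d), W_f=torch.randn(d, d), W_r=torch.randn(d, d),
    v_f=torch.randn(d), v_r=torch.randn(d),
    b_f=torch.zeros(d), b_r=torch.zeros(d),
)
h, c_T = sru_layer(x, **params)
print(h.shape, c_T.shape)  # torch.Size([5, 2, 8]) torch.Size([2, 8])
```

Because the sequential loop contains no matrix multiplications, its cost per step is small and the layer's throughput is dominated by the fully parallel projections, which is what allows SRU to approach the speed of convolutional and feed-forward layers.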


Evaluation results from the paper


| Task | Dataset | Model | Metric | Value | Global rank |
|---|---|---|---|---|---|
| Question Answering | SQuAD1.1 | SRU | EM | 71.4 | #99 |
| Question Answering | SQuAD1.1 | SRU | F1 | 80.2 | #98 |
| Machine Translation | WMT2014 English-German | Transformer + SRU | BLEU score | 28.4 | #7 |