Scaling Neural Machine Translation

Sequence-to-sequence learning models still require several days to reach state-of-the-art performance on large benchmark datasets using a single machine. This paper shows that reduced precision and large-batch training can speed up training by nearly 5x on a single 8-GPU machine with careful tuning and implementation. On WMT'14 English-German translation, we match the accuracy of Vaswani et al. (2017) in under 5 hours when training on 8 GPUs and we obtain a new state of the art of 29.3 BLEU after training for 85 minutes on 128 GPUs. We further improve these results to 29.8 BLEU by training on the much larger Paracrawl dataset. On the WMT'14 English-French task, we obtain a state-of-the-art BLEU of 43.2 in 8.5 hours on 128 GPUs.
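The two training ideas named in the abstract, reduced-precision (FP16) arithmetic and very large effective batches, are commonly combined via mixed-precision kernels plus gradient accumulation. Below is a minimal PyTorch sketch of that combination; it is not the paper's fairseq implementation, and the model, dataloader, and criterion names are placeholders.

```python
# Sketch: FP16 mixed-precision training with gradient accumulation to
# simulate a large batch. Assumes PyTorch; all object names are placeholders.
import torch

def train_epoch(model, optimizer, dataloader, criterion,
                accum_steps=16, device="cuda"):
    scaler = torch.cuda.amp.GradScaler()  # dynamic loss scaling for FP16 stability
    model.train()
    optimizer.zero_grad()
    for step, (src, tgt) in enumerate(dataloader):
        src, tgt = src.to(device), tgt.to(device)
        # Forward and loss in mixed precision (FP16 where safe).
        with torch.cuda.amp.autocast():
            logits = model(src, tgt[:, :-1])
            loss = criterion(logits.reshape(-1, logits.size(-1)),
                             tgt[:, 1:].reshape(-1))
            loss = loss / accum_steps  # average over accumulated micro-batches
        scaler.scale(loss).backward()
        # Update weights only every `accum_steps` micro-batches, giving an
        # effective batch `accum_steps` times larger without extra memory.
        if (step + 1) % accum_steps == 0:
            scaler.step(optimizer)
            scaler.update()
            optimizer.zero_grad()
```

With accum_steps=16 on 8 GPUs, the effective batch is 128 micro-batches per update, which is the kind of large-batch regime the abstract refers to; the exact batch sizes and learning-rate schedule used in the paper are not reproduced here.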


Datasets

WMT 2014 English-German, WMT 2014 English-French, ParaCrawl


Task                  Dataset                  Model            Metric                       Value   Global Rank
Machine Translation   WMT2014 English-French   Transformer Big  BLEU score                   43.2    #12
Machine Translation   WMT2014 English-French   Transformer Big  Hardware Burden              55G     #1
Machine Translation   WMT2014 English-French   Transformer Big  Operations per network pass  None    #1
Machine Translation   WMT2014 English-German   Transformer Big  BLEU score                   29.3    #25
Machine Translation   WMT2014 English-German   Transformer Big  Number of Params             210M    #7
Machine Translation   WMT2014 English-German   Transformer Big  Hardware Burden              9G      #1
Machine Translation   WMT2014 English-German   Transformer Big  Operations per network pass  None    #1

Methods


No methods listed for this paper.