Training Tips for the Transformer Model

1 Apr 2018 • Martin Popel • Ondřej Bojar

This article describes our experiments in neural machine translation using the recent Tensor2Tensor framework and the Transformer sequence-to-sequence model (Vaswani et al., 2017). We examine some of the critical parameters that affect the final translation quality, memory usage, training stability and training time, concluding each experiment with a set of recommendations for fellow researchers. In addition to confirming the general mantra "more data and larger models", we address scaling to multiple GPUs and provide practical tips for improved training regarding batch size, learning rate, warmup steps, maximum sentence length and checkpoint averaging.
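As a rough illustration of the kind of hyper-parameters the abstract refers to, the sketch below overrides batch size, learning rate, warmup steps and maximum sentence length on top of Tensor2Tensor's registered "transformer_big" configuration. This is a minimal sketch assuming the Tensor2Tensor 1.x Python API; the concrete values are illustrative placeholders, not the recommendations derived in the paper itself.

```python
# Minimal sketch (illustrative values, not the paper's recommendations):
# start from the registered "big" Transformer hyper-parameter set and
# override the parameters discussed in the abstract.
from tensor2tensor.models import transformer

hparams = transformer.transformer_big()
hparams.batch_size = 1500                   # (sub)word tokens per batch per GPU
hparams.learning_rate = 0.2                 # peak learning rate
hparams.learning_rate_warmup_steps = 16000  # linear warmup before decay
hparams.max_length = 150                    # drop longer training sentences

# Roughly equivalent override when launching the standard t2t-trainer binary
# (flag names can differ between Tensor2Tensor releases):
#   t2t-trainer ... \
#     --hparams='batch_size=1500,learning_rate=0.2,learning_rate_warmup_steps=16000,max_length=150' \
#     --worker_gpu=8
```

For checkpoint averaging, Tensor2Tensor ships an averaging utility (avg_checkpoints.py under tensor2tensor/utils); its exact flags vary between releases, so consult the installed version.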
