Tag-less Back-Translation

22 Dec 2019 · Idris Abdulmumin, Bashir Shehu Galadanci, Aliyu Garba

An effective method for generating large numbers of parallel sentences to train improved neural machine translation (NMT) systems is back-translation of target-side monolingual data. The standard back-translation method, however, has been shown to be unable to efficiently utilize the huge amounts of existing monolingual data, because translation models cannot differentiate between authentic and synthetic parallel data during training. Tagging, or using gates, has been used to enable translation models to distinguish between synthetic and authentic data, improving standard back-translation and also enabling iterative back-translation on language pairs that underperformed with standard back-translation. In this work, we approach back-translation as a domain adaptation problem, eliminating the need for explicit tagging. In the proposed approach, tag-less back-translation, the synthetic and authentic parallel data are treated as out-of-domain and in-domain data respectively, and, through pre-training and fine-tuning, the translation model is shown to learn more efficiently from them during training. Experimental results show that the approach outperforms the standard and tagged back-translation approaches on low-resource English-Vietnamese and English-German neural machine translation.
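The two-stage schedule the abstract describes, pre-training on the synthetic (out-of-domain) bitext and then fine-tuning on the authentic (in-domain) bitext, can be sketched as below. This is a minimal illustration, not the paper's actual setup: the toy `ToySeq2Seq` model, the random placeholder batches, and all hyper-parameters (learning rates, epochs, dimensions) are assumptions for demonstration only.

```python
# Minimal, runnable sketch of the tag-less back-translation schedule:
# Stage 1 pre-trains on synthetic (back-translated) parallel data,
# Stage 2 fine-tunes on authentic parallel data. All concrete choices
# here are illustrative placeholders, not the paper's configuration.
import torch
import torch.nn as nn

VOCAB, PAD = 1000, 0

class ToySeq2Seq(nn.Module):
    """Stand-in encoder-decoder; a real system would use a Transformer."""
    def __init__(self, vocab=VOCAB, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim, padding_idx=PAD)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.emb(src))        # final encoder state
        dec, _ = self.decoder(self.emb(tgt_in), h)
        return self.out(dec)                      # (batch, time, vocab)

def train_stage(model, batches, optimizer, criterion, epochs):
    model.train()
    for _ in range(epochs):
        for src, tgt in batches:
            optimizer.zero_grad()
            logits = model(src, tgt[:, :-1])      # teacher forcing
            loss = criterion(logits.transpose(1, 2), tgt[:, 1:])
            loss.backward()
            optimizer.step()

def random_bitext(n_batches, batch=8, length=12):
    """Placeholder for real synthetic or authentic parallel batches."""
    return [(torch.randint(1, VOCAB, (batch, length)),
             torch.randint(1, VOCAB, (batch, length)))
            for _ in range(n_batches)]

model = ToySeq2Seq()
criterion = nn.CrossEntropyLoss(ignore_index=PAD)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

# Stage 1: pre-train on the synthetic (out-of-domain) parallel data.
train_stage(model, random_bitext(20), optimizer, criterion, epochs=2)

# Stage 2: fine-tune on the authentic (in-domain) parallel data, here
# with a lower learning rate so the model adapts to the in-domain
# distribution without forgetting what pre-training provided.
for group in optimizer.param_groups:
    group["lr"] = 1e-4
train_stage(model, random_bitext(20), optimizer, criterion, epochs=2)
```

For contrast, tagged back-translation would instead prepend a reserved token (e.g. a <BT> marker) to every synthetic source sentence and train on the mixed data in one pass; the tag-less schedule replaces that explicit marker with the ordering of the two training stages.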



Results from the Paper


Ranked #32 on Machine Translation on IWSLT2014 German-English (using extra training data)

Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data
Machine Translation | IWSLT2014 German-English | Back-Translation Finetuning | BLEU score | 28.83 | #32 | Yes
