Exploiting Monolingual Data at Scale for Neural Machine Translation

While target-side monolingual data has proven very useful for improving neural machine translation (NMT) through back-translation, source-side monolingual data remains under-investigated. In this work, we study how to use both source-side and target-side monolingual data for NMT and propose an effective strategy that leverages both. First, we generate synthetic bitext by translating monolingual data from each language into the other, using models pretrained on the genuine bitext. Next, a model is trained on a noised version of the concatenated synthetic bitext, in which each source sequence is randomly corrupted. Finally, the model is fine-tuned on the genuine bitext and on a clean subset of the synthetic bitext, without adding any noise. Our approach achieves state-of-the-art results on the WMT16, WMT17, and WMT18 English$\leftrightarrow$German translation tasks and the WMT19 German$\to$French translation task, which demonstrates the effectiveness of our method. We also conduct a comprehensive study of how each part of the pipeline contributes.
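To make the noising step concrete, below is a minimal sketch of one common way to randomly corrupt a source sequence: word dropout, filler-token replacement, and bounded local shuffling (the scheme popularized for noised synthetic bitext by Edunov et al., 2018). The function name, the probabilities, and the filler token are illustrative assumptions, not the paper's exact settings.

```python
import random

def noise_source(tokens, p_drop=0.1, p_blank=0.1, max_shuffle_dist=3,
                 blank_token="<BLANK>", rng=random):
    """Randomly corrupt one source token sequence of a synthetic pair."""
    if not tokens:
        return []
    # 1) Word dropout: delete each token with probability p_drop.
    kept = [t for t in tokens if rng.random() >= p_drop]
    if not kept:  # never return an empty source sentence
        kept = [rng.choice(tokens)]
    # 2) Replacement: substitute a filler token with probability p_blank.
    kept = [blank_token if rng.random() < p_blank else t for t in kept]
    # 3) Local shuffle: add a bounded random offset to each position and
    #    re-sort, so no token moves more than max_shuffle_dist positions.
    keys = [i + rng.uniform(0, max_shuffle_dist + 1) for i in range(len(kept))]
    return [tok for _, tok in sorted(zip(keys, kept), key=lambda p: p[0])]

# Example: corrupt the source side of one synthetic sentence pair.
print(noise_source("the quick brown fox jumps over the lazy dog".split()))
```

Applying this only to the source side of the synthetic bitext forces the model to rely less on potentially noisy machine-translated inputs, while the clean fine-tuning stage restores output fluency.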

Results from the Paper


Ranked #1 on Machine Translation on WMT2016 English-German (SacreBLEU metric, using extra training data)
Task                 Dataset                  Model                               Metric     Value  Global Rank  Extra Training Data
Machine Translation  WMT2016 English-German   Exploiting Mono at Scale (single)   SacreBLEU  40.9   #1           Yes
Machine Translation  WMT2016 German-English   Exploiting Mono at Scale (single)   SacreBLEU  47.5   #1           Yes
Machine Translation  WMT2019 English-German   Exploiting Mono at Scale (single)   SacreBLEU  43.8   #1           Yes
Machine Translation  WMT2019 German-English   Exploiting Mono at Scale (single)   SacreBLEU  41.9   #1           Yes
