Unsupervised Neural Machine Translation

In spite of the recent success of neural machine translation (NMT) on standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs. There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but they still require a strong cross-lingual signal. In this work, we entirely remove the need for parallel data and propose a novel method to train an NMT system in a completely unsupervised manner, relying on nothing but monolingual corpora. Our model builds upon recent work on unsupervised embedding mappings, and consists of a slightly modified attentional encoder-decoder model that can be trained on monolingual corpora alone using a combination of denoising and backtranslation. Despite the simplicity of the approach, our system obtains 15.56 and 10.21 BLEU points on the WMT 2014 French-to-English and German-to-English translation tasks, respectively. The model can also profit from small parallel corpora, attaining 21.81 and 15.24 points when combined with 100,000 parallel sentences. Our implementation is released as an open-source project.
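To make the training recipe concrete, below is a minimal sketch of the two unsupervised signals the abstract describes: denoising, where the shared encoder-decoder reconstructs a sentence from a corrupted copy of itself, and backtranslation, where the model translates a monolingual sentence with its current parameters and then learns to recover the original from its own output. The `model.update`/`model.translate` interface and the adjacent-word-swap noise scheme are illustrative assumptions, not the authors' released implementation.

```python
import random

def corrupt(tokens):
    """Denoising corruption: random swaps of adjacent words, so the model
    cannot simply copy its input and must learn to model word order."""
    out = list(tokens)
    if len(out) < 2:
        return out
    for _ in range(len(out) // 2):  # roughly N/2 candidate swaps
        j = random.randrange(len(out) - 1)
        out[j], out[j + 1] = out[j + 1], out[j]
    return out

def unsupervised_step(model, sent_l1, sent_l2):
    """One training step using one monolingual sentence per language.
    `model` is a hypothetical shared-encoder NMT system with per-language
    decoders; `update` trains on a (src, tgt) pair, `translate` runs inference."""
    # 1) Denoising: encode a corrupted sentence, train to decode the original.
    model.update(src=corrupt(sent_l1), tgt=sent_l1, direction="l1->l1")
    model.update(src=corrupt(sent_l2), tgt=sent_l2, direction="l2->l2")
    # 2) Backtranslation: translate with the current model (inference only),
    #    then train on the resulting synthetic pair in the reverse direction.
    pseudo_l2 = model.translate(sent_l1, direction="l1->l2")
    pseudo_l1 = model.translate(sent_l2, direction="l2->l1")
    model.update(src=pseudo_l2, tgt=sent_l1, direction="l2->l1")
    model.update(src=pseudo_l1, tgt=sent_l2, direction="l1->l2")
```

Alternating these two objectives lets the system bootstrap from the initial cross-lingual embedding mapping toward progressively better translations without ever seeing a parallel sentence.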

PDF | ICLR 2018 Abstract
| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Machine Translation | WMT2014 English-French | Unsupervised attentional encoder-decoder + BPE | BLEU score | 14.36 | #56 |
| Machine Translation | WMT2014 English-French | Unsupervised attentional encoder-decoder + BPE | Hardware Burden | None | #1 |
| Machine Translation | WMT2014 English-French | Unsupervised attentional encoder-decoder + BPE | Operations per network pass | None | #1 |
| Machine Translation | WMT2015 English-German | Unsupervised attentional encoder-decoder + BPE | BLEU score | 6.89 | #6 |

Methods

Attentional encoder-decoder · Denoising autoencoding · Backtranslation · Unsupervised cross-lingual embedding mappings · BPE