Word Translation Without Parallel Data

ICLR 2018 Alexis Conneau • Guillaume Lample • Marc'Aurelio Ranzato • Ludovic Denoyer • Hervé Jégou

State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. In this work, we show that a bilingual dictionary between two languages can be built without any parallel corpora, by aligning monolingual word embedding spaces in an unsupervised way. Without using any character information, our model even outperforms existing supervised methods on cross-lingual tasks for some language pairs. We finally describe experiments on the English-Esperanto low-resource language pair, for which only a limited amount of parallel data exists, to show the potential impact of our method in fully unsupervised machine translation.
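The alignment step can be illustrated with orthogonal Procrustes: given paired anchor words in two embedding spaces, the closed-form orthogonal map minimizing the Frobenius distance between them is obtained from an SVD. The sketch below is a minimal NumPy illustration of that idea, not the paper's full unsupervised pipeline; all names are illustrative.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal Procrustes: find orthogonal W minimizing ||X @ W - Y||_F.

    X, Y: (n, d) arrays of paired source/target word vectors (rows aligned).
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy check: if Y is an orthogonal rotation of X, Procrustes recovers it.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))           # 100 "words" in a 5-dim source space
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # random orthogonal rotation
Y = X @ Q                               # target space = rotated source space
W = procrustes_align(X, Y)              # should recover Q up to precision
```

In the unsupervised setting, the paired anchors are not given; a synthetic seed dictionary must first be induced (e.g. by matching high-frequency words across the adversarially aligned spaces) before a refinement step like this one can be applied.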
