Multilingual Distributed Representations without Word Alignment

20 Dec 2013  ·  Karl Moritz Hermann, Phil Blunsom

Distributed representations of meaning are a natural way to encode covariance relationships between words and phrases in NLP. By overcoming data sparsity problems, as well as providing information about semantic relatedness which is not available in discrete representations, distributed representations have proven useful in many NLP tasks. Recent work has shown how compositional semantic representations can successfully be applied to a number of monolingual applications such as sentiment analysis. At the same time, there has been some initial success in work on learning shared word-level representations across languages. We combine these two approaches by proposing a method for learning distributed representations in a multilingual setup. Our model learns to assign similar embeddings to aligned sentences and dissimilar ones to sentences which are not aligned, while not requiring word alignments. We show that our representations are semantically informative and apply them to a cross-lingual document classification task where we outperform the previous state of the art. Further, by employing parallel corpora of multiple language pairs we find that our model learns representations that capture semantic relationships across languages for which no parallel data was used.
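
The abstract describes the training signal only at a high level: aligned sentence pairs should receive similar embeddings and non-aligned pairs dissimilar ones, without any word-level alignment. Below is a minimal sketch of one way such a contrastive objective can be written. The additive sentence composition, the squared-Euclidean distance, the margin value, and all names (`AdditiveSentenceEncoder`, `contrastive_hinge_loss`, the toy batch) are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of a margin-based contrastive objective over sentence embeddings,
# assuming additive composition of word vectors per sentence.
import torch
import torch.nn as nn

class AdditiveSentenceEncoder(nn.Module):
    """Composes a sentence embedding as the sum of its word embeddings."""
    def __init__(self, vocab_size: int, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, word_ids: torch.Tensor) -> torch.Tensor:
        # word_ids: (batch, max_len) token indices; 0 is treated as padding.
        mask = (word_ids != 0).unsqueeze(-1).float()
        return (self.embed(word_ids) * mask).sum(dim=1)

def contrastive_hinge_loss(src, tgt_pos, tgt_neg, margin: float = 1.0):
    """Aligned pairs should be closer (squared Euclidean distance) than
    non-aligned pairs by at least `margin`; no word alignment is used."""
    d_pos = ((src - tgt_pos) ** 2).sum(dim=1)
    d_neg = ((src - tgt_neg) ** 2).sum(dim=1)
    return torch.clamp(margin + d_pos - d_neg, min=0.0).mean()

# Toy usage: random token ids stand in for a batch from a parallel corpus.
enc_en = AdditiveSentenceEncoder(vocab_size=1000)
enc_de = AdditiveSentenceEncoder(vocab_size=1000)
en = torch.randint(1, 1000, (32, 20))       # source-language sentences
de_pos = torch.randint(1, 1000, (32, 20))   # their aligned translations
de_neg = torch.randint(1, 1000, (32, 20))   # randomly drawn (non-aligned) sentences
loss = contrastive_hinge_loss(enc_en(en), enc_de(de_pos), enc_de(de_neg))
loss.backward()  # gradients flow into both languages' embedding tables
```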


Datasets

Reuters RCV1/RCV2

Results

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Cross-Lingual Document Classification | Reuters RCV1/RCV2 English-to-German | biCVM+ | Accuracy | 86.2 | #3 |
| Cross-Lingual Document Classification | Reuters RCV1/RCV2 German-to-English | biCVM+ | Accuracy | 76.9 | #3 |

Methods


No methods listed for this paper.