Leveraging Monolingual Data for Crosslingual Compositional Word Representations

19 Dec 2014  ·  Hubert Soyer, Pontus Stenetorp, Akiko Aizawa

In this work, we present a novel neural network-based architecture for inducing compositional crosslingual word representations. Unlike previously proposed methods, our method fulfills the following three criteria: it constrains the word-level representations to be compositional, it is capable of leveraging both bilingual and monolingual data, and it is scalable to large vocabularies and large quantities of data. The key component of our approach is what we refer to as a monolingual inclusion criterion, which exploits the observation that phrases are semantically more closely related to their sub-phrases than to other randomly sampled phrases. We evaluate our method on a well-established crosslingual document classification task and achieve results that are either comparable to, or greatly improve upon, previous state-of-the-art methods. Concretely, our method reaches 92.7% and 84.4% accuracy on the English-to-German and German-to-English sub-tasks, respectively. The former advances the state of the art by 0.9 percentage points of accuracy; the latter is an absolute improvement of 7.7 percentage points of accuracy over the previous state of the art, corresponding to a 33.0% reduction in error.
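To make the monolingual inclusion criterion concrete, the sketch below assumes additive (mean) composition of word vectors into phrase vectors and a margin-based hinge loss that pushes a phrase representation closer to that of its own sub-phrase than to that of a randomly sampled phrase. The variable names, margin value, and negative-sampling scheme are illustrative assumptions, not the paper's exact objective.

```python
# Illustrative sketch of a monolingual inclusion objective.
# Assumptions (not the authors' exact formulation): mean composition of
# word vectors, squared Euclidean distance, and a fixed margin of 1.0.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim = 1000, 50
E = rng.normal(scale=0.1, size=(vocab_size, dim))  # word embedding matrix

def compose(word_ids):
    """Compose a phrase vector as the mean of its word vectors."""
    return E[word_ids].mean(axis=0)

def inclusion_loss(phrase, sub_phrase, random_phrase, margin=1.0):
    """Hinge loss: a phrase should lie closer to its own sub-phrase
    than to a randomly sampled phrase, by at least `margin`."""
    p = compose(phrase)
    s = compose(sub_phrase)
    r = compose(random_phrase)
    d_pos = np.sum((p - s) ** 2)  # distance to the contained sub-phrase
    d_neg = np.sum((p - r) ** 2)  # distance to an unrelated phrase
    return max(0.0, margin + d_pos - d_neg)

# Example: the sub-phrase is a contiguous slice of the phrase; the
# negative example is a phrase sampled at random from the corpus.
phrase = rng.integers(0, vocab_size, size=6)
sub_phrase = phrase[1:4]
random_phrase = rng.integers(0, vocab_size, size=5)
print(inclusion_loss(phrase, sub_phrase, random_phrase))
```

Minimizing such a loss over many (phrase, sub-phrase, random phrase) triples requires only monolingual text, which is what allows the method to combine monolingual data with a bilingual objective.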


Datasets

Reuters RCV1/RCV2

Results
Task | Dataset | Model | Metric | Value | Global Rank
Cross-Lingual Document Classification | Reuters RCV1/RCV2 English-to-German | Biinclusion (Euro500kReuters) | Accuracy | 92.7 | #1
Cross-Lingual Document Classification | Reuters RCV1/RCV2 German-to-English | Biinclusion (Euro500kReuters) | Accuracy | 84.4 | #1

Methods


No methods listed for this paper.