Continuous word representations learned separately on distinct languages can be aligned so that their words become comparable in a common space.
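As a rough illustration of such an alignment, here is a minimal sketch of the orthogonal Procrustes solution commonly used for this step, assuming a seed dictionary of translation pairs; the names and toy data are illustrative, not any particular paper's method:

```python
import numpy as np

def procrustes_align(X, Y):
    """Learn an orthogonal W so that X @ W approximates Y.

    X, Y: (n, d) arrays holding source/target embeddings for n seed
    translation pairs. The minimizer of ||XW - Y||_F over orthogonal W
    is W = U @ Vt, where U, S, Vt = SVD(X.T @ Y).
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# toy usage: recover a known rotation from synthetic "embeddings"
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))                 # source vectors
W_true, _ = np.linalg.qr(rng.normal(size=(300, 300)))
Y = X @ W_true                                   # rotated target vectors
W = procrustes_align(X, Y)
print(np.allclose(X @ W, Y, atol=1e-6))          # True: rotation recovered
```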
From these correspondences, a cross-lingual representation is created that enables the transfer of classification knowledge from the source to the target language.
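A hedged sketch of what such transfer can look like in practice, assuming labelled source-language vectors and target-language vectors that already live in the shared space (the data here is synthetic and purely illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
d = 300
# Synthetic stand-ins: labelled source-language vectors and unlabelled
# target-language vectors, both assumed to be in the shared space
# (e.g. after an alignment such as procrustes_align above).
X_src = rng.normal(size=(500, d))
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(size=(100, d))

clf = LogisticRegression(max_iter=1000).fit(X_src, y_src)
pred_tgt = clf.predict(X_tgt)   # zero-shot transfer: no target labels used
```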
Our approach decouples learning the source-to-target transformation into (a) learning rotations that align the language-specific embeddings in a common space, and (b) learning a similarity metric in that common space to model similarities between the embeddings.
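One illustrative way to instantiate this decoupling is via an SVD of the cross-correlation of seed translation pairs, which yields one rotation per language plus a diagonal metric; this is an assumption-laden toy version, not necessarily the paper's exact algorithm:

```python
import numpy as np

def decoupled_align(X, Y):
    """X, Y: (n, d) embeddings of n seed translation pairs (assumed input).

    Returns (a) per-language rotations Wx, Wy into a common space and
    (b) a diagonal similarity metric B on that space, both read off an
    SVD of the cross-correlation X.T @ Y.
    """
    U, S, Vt = np.linalg.svd(X.T @ Y)
    Wx, Wy = U, Vt.T            # (a) language-specific rotations
    B = np.diag(S / S.max())    # (b) a simple diagonal similarity metric
    return Wx, Wy, B

def similarity(x, y, Wx, Wy, B):
    # score a source vector x against a target vector y in the common space
    return float((x @ Wx) @ B @ (Wy.T @ y))
```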
Recent advances in cross-lingual word embeddings have primarily relied on mapping-based methods, which project pretrained word embeddings from different languages into a shared space through a linear transformation.
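For concreteness, a minimal mapping-based baseline in this spirit: an unconstrained least-squares map over a seed dictionary, followed by nearest-neighbour retrieval in the shared space. Variable names and toy data are assumptions, not any specific paper's method:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 300))            # source seed-pair vectors
Y = X @ rng.normal(size=(300, 300)) * 0.1   # toy target seed-pair vectors

# Learn W minimizing ||XW - Y||_F (no orthogonality constraint).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def translate(x, W, target_vocab):
    """Index of the target word whose vector is closest (cosine) to x @ W."""
    z = x @ W
    sims = target_vocab @ z / (
        np.linalg.norm(target_vocab, axis=1) * np.linalg.norm(z))
    return int(np.argmax(sims))

print(translate(X[0], W, Y))                # 0: retrieves its own pair
```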
In contrast, we propose an unsupervised and very resource-light approach for measuring semantic similarity between texts in different languages.
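A minimal unsupervised sketch of one such measure, assuming embedding tables that already share a space: represent each text as the average of its word vectors and compare with cosine similarity (the dictionaries emb_src and emb_tgt are hypothetical):

```python
import numpy as np

def text_vec(tokens, emb):
    """Average the vectors of in-vocabulary tokens (emb: word -> vector)."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

def cross_lingual_sim(src_tokens, tgt_tokens, emb_src, emb_tgt):
    # both embedding tables are assumed to live in one shared space
    u = text_vec(src_tokens, emb_src)
    v = text_vec(tgt_tokens, emb_tgt)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```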