Multilingual Word Embeddings
18 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Unsupervised Multilingual Word Embeddings
Multilingual Word Embeddings (MWEs) represent words from multiple languages in a single distributional vector space.
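The practical payoff of a single shared space is that cross-lingual lookups reduce to nearest-neighbor search by cosine similarity. A minimal sketch with hand-made toy vectors (illustrative only, not from any trained model; the `lang:word` key convention is an assumption for this example):

```python
import numpy as np

# Toy shared space: a few English and German word vectors.
# In a real MWE model these would come from a trained multilingual embedding.
emb = {
    "en:dog": np.array([0.9, 0.1, 0.0]),
    "en:cat": np.array([0.1, 0.9, 0.0]),
    "de:Hund": np.array([0.88, 0.12, 0.05]),
    "de:Katze": np.array([0.12, 0.85, 0.02]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(word, lang_prefix):
    """Return the most similar word carrying the target-language prefix."""
    vec = emb[word]
    candidates = [w for w in emb if w.startswith(lang_prefix)]
    return max(candidates, key=lambda w: cosine(vec, emb[w]))

print(nearest("en:dog", "de:"))  # de:Hund
```

Because all languages share one space, the same lookup also supports cross-lingual document retrieval and transfer of classifiers trained in one language.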
SimAlign: High Quality Word Alignments without Parallel Training Data using Static and Contextualized Embeddings
We find that alignments created from embeddings are superior to those produced by traditional statistical aligners for four language pairs and comparable for two, even when abundant parallel data is available; e.g., contextualized embeddings achieve an English-German word alignment F1 that is 5 percentage points higher than eflomal, a high-quality statistical aligner trained on 100k parallel sentences.
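The core idea of similarity-based alignment can be sketched as a mutual-argmax heuristic: align token pair (i, j) when each is the other's most similar token. This is a simplified stand-in inspired by SimAlign's approach, with made-up toy vectors rather than real contextualized embeddings:

```python
import numpy as np

def mutual_argmax_align(src_vecs, tgt_vecs):
    """Return (i, j) pairs where source token i and target token j are
    each other's most cosine-similar token (mutual-argmax heuristic)."""
    # Normalize rows so the similarity matrix is a single matrix product.
    s = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    t = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sim = s @ t.T
    fwd = sim.argmax(axis=1)  # best target for each source token
    bwd = sim.argmax(axis=0)  # best source for each target token
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

# Toy vectors for a 3-token source sentence and a 3-token target sentence.
src = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
tgt = np.array([[0.0, 1.0], [1.0, 0.1], [0.6, 0.8]])
print(mutual_argmax_align(src, tgt))  # [(0, 1), (1, 0), (2, 2)]
```

Unlike statistical aligners, this needs no parallel training data: any multilingual encoder that yields comparable vectors for both sentences suffices.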
ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge
This paper describes Luminoso's participation in SemEval 2017 Task 2, "Multilingual and Cross-lingual Semantic Word Similarity", with a system based on ConceptNet.
Learning Multilingual Word Embeddings in Latent Metric Space: A Geometric Approach
Our approach decomposes learning the source-to-target transformation into (a) learning rotations that align the language-specific embeddings to a common space, and (b) learning a similarity metric in that common space to model similarities between the embeddings.
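The rotation step in (a) is typically an orthogonal Procrustes problem, which has a closed-form SVD solution. A minimal sketch (standard Procrustes, shown here as an illustration of the rotation-learning step rather than the paper's exact method):

```python
import numpy as np

def procrustes_rotation(X, Y):
    """Orthogonal matrix W minimizing ||X @ W - Y||_F, via the
    closed-form solution W = U @ Vt from the SVD of X.T @ Y."""
    u, _, vt = np.linalg.svd(X.T @ Y)
    return u @ vt

# Toy check: rotate a source space by a known rotation and recover it.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))          # source embeddings
R = np.array([[0.0, -1.0], [1.0, 0.0]])  # true 90-degree rotation
Y = X @ R                                # target embeddings
W = procrustes_rotation(X, Y)
print(np.allclose(W, R))  # True
```

Restricting the map to a rotation preserves distances within each language's space, which is why step (b) can then learn a shared similarity metric on top.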
Massively Multilingual Word Embeddings
We introduce new methods for estimating and evaluating embeddings of words in more than fifty languages in a single shared embedding space.
ALL-IN-1: Short Text Classification with One Model for All Languages
We present ALL-IN-1, a simple model for multilingual text classification that does not require any parallel data.
Cross-lingual Lexical Sememe Prediction
We propose a novel framework that models correlations between sememes and multilingual words in a low-dimensional semantic space for sememe prediction.
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task
In this paper, we propose a new approach to learn multimodal multilingual embeddings for matching images and their relevant captions in two languages.