Cross-Lingual Word Embeddings

31 papers with code • 0 benchmarks • 0 datasets

Cross-lingual word embeddings represent words from two or more languages in a single shared vector space, so that translations lie close together. They enable transfer of lexical and downstream-task knowledge from high-resource to low-resource languages.


Most implemented papers

Word Translation Without Parallel Data

facebookresearch/MUSE ICLR 2018

Finally, we describe experiments on the English-Esperanto low-resource language pair, for which only a limited amount of parallel data exists, to show the potential impact of our method in fully unsupervised machine translation.

A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings

artetxem/vecmap ACL 2018

Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training.

Lost in Evaluation: Misleading Benchmarks for Bilingual Dictionary Induction

coastalcph/MUSE_dicos IJCNLP 2019

We study the composition and quality of the test sets for five diverse languages from this dataset, with concerning findings: (1) a quarter of the data consists of proper nouns, which can hardly be indicative of BDI performance, and (2) there are pervasive gaps in the gold-standard targets.

A Pilot Study for Chinese SQL Semantic Parsing

taolusi/chisp IJCNLP 2019

The task of semantic parsing is highly useful for dialogue and question answering systems.

Robust Cross-lingual Embeddings from Parallel Sentences

epfml/Bi-Sent2Vec 28 Dec 2019

Recent advances in cross-lingual word embeddings have primarily relied on mapping-based methods, which project pretrained word embeddings from different languages into a shared space through a linear transformation.
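The linear mapping described above is commonly solved in closed form as an orthogonal Procrustes problem, given a small seed dictionary of translation pairs. A minimal sketch with NumPy, using synthetic toy embeddings (all names and sizes here are illustrative, not from any of the listed repos):

```python
import numpy as np

# Toy setup: row i of X (source language) and row i of Y (target
# language) are assumed to be a seed-dictionary translation pair.
rng = np.random.default_rng(0)
d = 4
W_true = np.linalg.qr(rng.normal(size=(d, d)))[0]  # hidden orthogonal map
X = rng.normal(size=(10, d))                       # source embeddings
Y = X @ W_true.T                                   # target embeddings

# Orthogonal Procrustes: the W minimizing ||X W^T - Y|| over orthogonal
# matrices is W = U V^T, where U S V^T is the SVD of Y^T X.
U, _, Vt = np.linalg.svd(Y.T @ X)
W = U @ Vt

# W maps source vectors into the target space.
print(np.allclose(X @ W.T, Y))  # True on this noiseless toy example
```

In practice the recovered mapping is only approximate, since real embedding spaces are not exactly isometric; the SVD solution is still the standard least-squares choice for mapping-based methods.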

Baselines and test data for cross-lingual inference

nlpitu/xnli LREC 2018

In this paper, we propose to advance the research in SNLI-style natural language inference toward multilingual evaluation.

Model Transfer for Tagging Low-resource Languages using a Bilingual Dictionary

mengf1/trpos ACL 2017

Cross-lingual model transfer is a compelling and popular method for predicting annotations in a low-resource language, whereby parallel corpora provide a bridge to a high-resource language and its associated annotated corpora.

Improving Cross-Lingual Word Embeddings by Meeting in the Middle

yeraidm/meemi EMNLP 2018

Cross-lingual word embeddings are becoming increasingly important in multilingual NLP.

Are Girls Neko or Shōjo? Cross-Lingual Alignment of Non-Isomorphic Embeddings with Iterative Normalization

zhangmozhi/iternorm 4 Jun 2019

Cross-lingual word embeddings (CLWE) underlie many multilingual natural language processing systems, often through orthogonal transformations of pre-trained monolingual embeddings.
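The iterative normalization idea referenced in the title can be sketched as alternating two simple preprocessing steps, length-normalizing each vector and mean-centering the vocabulary, until both conditions approximately hold. A hedged toy sketch (the function name and iteration count are assumptions, not the repo's API):

```python
import numpy as np

def iterative_normalization(E, n_iters=5):
    """Alternately length-normalize and mean-center embedding rows.

    A minimal sketch: unit length and zero mean are enforced in turn,
    which empirically converges to embeddings satisfying both.
    """
    E = E.astype(float).copy()
    for _ in range(n_iters):
        E /= np.linalg.norm(E, axis=1, keepdims=True)  # unit length
        E -= E.mean(axis=0, keepdims=True)             # zero mean
    return E

rng = np.random.default_rng(0)
E = iterative_normalization(rng.normal(size=(100, 8)))
print(np.abs(E.mean(axis=0)).max())  # close to 0: rows are centered
```

Enforcing both properties makes the two monolingual spaces more nearly isomorphic before an orthogonal mapping is fit.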