Zero-Shot Cross-Lingual Transfer
53 papers with code • 2 benchmarks • 4 datasets
Most implemented papers
Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond
We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts.
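A minimal sketch of how such joint multilingual sentence embeddings are typically used for cross-lingual similarity, via the third-party laserembeddings wrapper; the package, the model-download step, and the example sentences are assumptions for illustration, not part of the paper.

```python
# Sketch: embed sentences from different languages into the shared LASER space
# and compare them. Assumes `pip install laserembeddings` and
# `python -m laserembeddings download-models` have been run beforehand.
import numpy as np
from laserembeddings import Laser

laser = Laser()
sentences = ["The cat sits on the mat.",
             "Le chat est assis sur le tapis.",
             "Ich esse gern Pizza."]
langs = ["en", "fr", "de"]

# Embed each sentence with its own language code; result is (3, 1024).
emb = np.vstack([laser.embed_sentences([s], lang=l) for s, l in zip(sentences, langs)])

# Cosine similarity: the English/French paraphrases should score far higher
# against each other than against the unrelated German sentence.
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
print(emb @ emb.T)
```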
XLM-E: Cross-lingual Language Model Pre-training via ELECTRA
In this paper, we introduce ELECTRA-style tasks to cross-lingual language model pre-training.
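A toy PyTorch sketch of the ELECTRA-style replaced token detection objective referenced here; the tiny generator and discriminator, vocabulary size, and masking rate are illustrative assumptions, and the full recipe also trains the generator with masked language modeling.

```python
# Toy replaced token detection (RTD): a generator fills masked positions with
# sampled tokens, and a discriminator predicts per position whether the token
# was replaced. Model sizes and data are stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, hidden, seq_len, batch = 1000, 64, 16, 8

generator = nn.Sequential(nn.Embedding(vocab_size, hidden), nn.Linear(hidden, vocab_size))
discriminator = nn.Sequential(nn.Embedding(vocab_size, hidden), nn.Linear(hidden, 1))

tokens = torch.randint(0, vocab_size, (batch, seq_len))   # original token ids
mask = torch.rand(batch, seq_len) < 0.15                  # positions to corrupt

# Generator proposes replacements for the masked positions.
gen_logits = generator(tokens)                            # (batch, seq_len, vocab)
sampled = torch.distributions.Categorical(logits=gen_logits).sample()
corrupted = torch.where(mask, sampled, tokens)

# Discriminator labels each position: replaced (1) or original (0).
# Positions where the sample happened to equal the original count as original.
disc_logits = discriminator(corrupted).squeeze(-1)        # (batch, seq_len)
labels = (corrupted != tokens).float()
rtd_loss = F.binary_cross_entropy_with_logits(disc_logits, labels)
```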
Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT
Pretrained contextual representation models (Peters et al., 2018; Devlin et al., 2018) have pushed forward the state-of-the-art on many NLP tasks.
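A minimal sketch of the zero-shot transfer recipe this line of work evaluates, assuming HuggingFace Transformers and the bert-base-multilingual-cased checkpoint; the toy data and single training step are placeholders.

```python
# Sketch: fine-tune multilingual BERT on labeled data in one language only,
# then run it unchanged on another language with no labeled data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Fine-tune on English labels only (one toy step shown).
english_batch = tok(["great movie", "terrible plot"], return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])
loss = model(**english_batch, labels=labels).loss
loss.backward(); optimizer.step(); optimizer.zero_grad()

# Zero-shot evaluation on a language never seen with labels.
spanish_batch = tok("una película maravillosa", return_tensors="pt")
with torch.no_grad():
    pred = model(**spanish_batch).logits.argmax(-1)
print(pred)  # prediction made without any Spanish training data
```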
Simple and Effective Zero-shot Cross-lingual Phoneme Recognition
Recent progress in self-training, self-supervised pretraining, and unsupervised learning has enabled well-performing speech recognition systems without any labeled data.
Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word Vector Specialization
Our adversarial post-specialization method propagates the external lexical knowledge to the full distributional space.
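A minimal PyTorch sketch of the idea with stand-in vectors: a mapping network is trained on the words covered by the lexical constraints using a reconstruction loss plus an adversarial discriminator, and can then be applied to the rest of the vocabulary. Architectures and loss weighting here are illustrative, not the paper's.

```python
# Adversarial post-specialization sketch: learn a mapping G from original
# distributional vectors to their specialized counterparts on the seen
# vocabulary, with a discriminator D pushing mapped vectors toward the
# distribution of gold specialized vectors.
import torch
import torch.nn as nn

dim, n_seen = 300, 5000
orig = torch.randn(n_seen, dim)   # distributional vectors covered by the lexicon
spec = torch.randn(n_seen, dim)   # their specialized (retrofitted) versions

G = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
D = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    idx = torch.randint(0, n_seen, (256,))
    x, y = orig[idx], spec[idx]

    # Discriminator: real = gold specialized vectors, fake = mapped vectors.
    d_loss = bce(D(y), torch.ones(256, 1)) + bce(D(G(x).detach()), torch.zeros(256, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Mapper: reconstruct the specialized vector and fool the discriminator.
    mapped = G(x)
    g_loss = nn.functional.mse_loss(mapped, y) + bce(D(mapped), torch.ones(256, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, G is applied to every vector in the vocabulary, propagating
# the lexical constraints beyond the words seen in the lexicon.
```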
Cross-Lingual BERT Transformation for Zero-Shot Dependency Parsing
In this approach, a linear transformation is learned from contextual word alignments to align the contextualized embeddings independently trained in different languages.
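A minimal sketch of learning such a linear map from aligned contextual vectors, here solved in closed form as an orthogonal Procrustes problem with NumPy; the random arrays stand in for embeddings extracted at aligned word positions, and the paper's exact training procedure may differ.

```python
# Align two independently trained contextual embedding spaces using pairs of
# vectors taken at word-aligned positions.
import numpy as np

d, n_pairs = 768, 10000
X = np.random.randn(n_pairs, d)   # source-language contextual embeddings
Y = np.random.randn(n_pairs, d)   # target-language embeddings at aligned positions

# W = argmin ||XW - Y||_F subject to W orthogonal, solved via SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

aligned_X = X @ W                 # source embeddings mapped into the target space
```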
Cross-Lingual Natural Language Generation via Pre-Training
In this work we focus on transferring supervision signals of natural language generation (NLG) tasks between multiple languages.
Parameter Space Factorization for Zero-Shot Learning across Tasks and Languages
In this work, we propose a Bayesian generative model for the space of neural parameters.
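A point-estimate sketch of the factorization idea with hypothetical sizes: task and language latent embeddings are combined by a small hyper-network to generate the parameters of a task- and language-specific classifier head, so an unseen (task, language) pair still receives parameters. The Bayesian treatment and inference procedure of the paper are omitted here.

```python
# Factorize head parameters over tasks and languages via shared latents.
import torch
import torch.nn as nn

n_tasks, n_langs, latent, hidden, n_classes = 3, 10, 32, 128, 2

task_emb = nn.Embedding(n_tasks, latent)
lang_emb = nn.Embedding(n_langs, latent)
# Hyper-network mapping (task, language) latents to classifier weights.
hyper = nn.Linear(2 * latent, hidden * n_classes + n_classes)

def head_params(task_id, lang_id):
    z = torch.cat([task_emb(torch.tensor(task_id)), lang_emb(torch.tensor(lang_id))])
    out = hyper(z)
    W = out[: hidden * n_classes].view(n_classes, hidden)
    b = out[hidden * n_classes:]
    return W, b

# Seen (task, language) pairs train the shared latents; an unseen, zero-shot
# pair still yields parameters from its task and language embeddings.
W, b = head_params(task_id=0, lang_id=7)
features = torch.randn(4, hidden)   # stand-in for encoder outputs
logits = features @ W.t() + b
```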
Zero-Shot Cross-Lingual Transfer with Meta Learning
We show that this challenging setup can be approached using meta-learning, where, in addition to training a source language model, another model learns to select which training instances are the most beneficial to the first.
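A minimal sketch of such an instance-selection loop in PyTorch (2.x, for torch.func) with toy tensors: a scorer weights source-language examples, the task model takes one simulated inner step under the weighted loss, and the scorer is updated from the loss on a query batch standing in for the target language. The paper's actual meta-learning setup and models differ.

```python
# Meta-learned instance selection: the scorer is rewarded when the examples it
# up-weights produce an update that helps on the query (target-like) batch.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

dim, n_classes = 64, 3
task_model = nn.Linear(dim, n_classes)                     # stand-in task model
scorer = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_scorer = torch.optim.Adam(scorer.parameters(), lr=1e-3)
inner_lr = 0.1

support_x, support_y = torch.randn(32, dim), torch.randint(0, n_classes, (32,))
query_x, query_y = torch.randn(16, dim), torch.randint(0, n_classes, (16,))

# 1) Weight each support example and take one simulated inner step.
weights = torch.sigmoid(scorer(support_x)).squeeze(-1)
loss_per_ex = F.cross_entropy(task_model(support_x), support_y, reduction="none")
inner_loss = (weights * loss_per_ex).mean()
grads = torch.autograd.grad(inner_loss, list(task_model.parameters()), create_graph=True)
updated = {name: p - inner_lr * g
           for (name, p), g in zip(task_model.named_parameters(), grads)}

# 2) Evaluate the updated model on the query batch; backprop through the
#    inner step so the scorer learns which instances were beneficial.
query_logits = functional_call(task_model, updated, (query_x,))
query_loss = F.cross_entropy(query_logits, query_y)
opt_scorer.zero_grad(); query_loss.backward(); opt_scorer.step()
```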
On the Limitations of Cross-lingual Encoders as Exposed by Reference-Free Machine Translation Evaluation
We systematically investigate a range of metrics based on state-of-the-art cross-lingual semantic representations obtained with pretrained M-BERT and LASER.
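A simplified example of the kind of metric studied, assuming HuggingFace Transformers: the source sentence and the MT output are embedded with mean-pooled M-BERT representations and scored by cosine similarity, with no reference translation involved. The metrics investigated in the paper are more refined than this sketch.

```python
# Reference-free adequacy sketch: cross-lingual similarity between the source
# sentence and the system output, using mean-pooled multilingual BERT states.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = AutoModel.from_pretrained("bert-base-multilingual-cased")

def embed(text):
    batch = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state          # (1, seq_len, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)          # mean over real tokens

source = "Der Hund schläft auf dem Sofa."
hypothesis = "The dog is sleeping on the couch."
score = torch.cosine_similarity(embed(source), embed(hypothesis)).item()
print(f"reference-free adequacy score: {score:.3f}")
```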