XLM-R

99 papers with code • 0 benchmarks • 1 dataset

Most implemented papers

Unsupervised Cross-lingual Representation Learning at Scale

facebookresearch/XLM ACL 2020

We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale.
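
A minimal sketch of probing the released XLM-R checkpoint through the Hugging Face transformers library; the "xlm-roberta-base" model ID and the fill-mask probe are illustrative and not part of the paper's original fairseq training code.

    # Minimal sketch: probe the public XLM-R checkpoint with a masked-LM fill-in,
    # assuming the Hugging Face `transformers` package and the published
    # "xlm-roberta-base" checkpoint.
    from transformers import pipeline

    fill = pipeline("fill-mask", model="xlm-roberta-base")

    # XLM-R uses one shared vocabulary across ~100 languages, so the same model
    # handles prompts in different languages without any language ID.
    for prompt in ["Paris is the capital of <mask>.",
                   "París es la capital de <mask>."]:
        top = fill(prompt)[0]
        print(prompt, "->", top["token_str"], round(top["score"], 3))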

AdapterHub: A Framework for Adapting Transformers

Adapter-Hub/adapter-transformers EMNLP 2020

We propose AdapterHub, a framework that allows dynamic "stitching-in" of pre-trained adapters for different tasks and languages.
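
A short sketch of the "stitch-in" workflow, assuming the Adapter-Hub/adapter-transformers fork is installed in place of plain transformers; the adapter identifier "sentiment/sst-2@ukp" follows the hub's task/dataset@organization naming and is used here only as an example.

    # Sketch of loading a pre-trained task adapter from AdapterHub.
    # AutoModelWithHeads is provided by the adapter-transformers fork,
    # not by vanilla transformers.
    from transformers import AutoModelWithHeads, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

    # Download an adapter (plus its prediction head) and activate it;
    # the base model weights stay frozen and unchanged.
    adapter_name = model.load_adapter("sentiment/sst-2@ukp")
    model.set_active_adapters(adapter_name)

    inputs = tokenizer("AdapterHub makes adapter reuse easy.", return_tensors="pt")
    print(model(**inputs))  # output of the stitched-in sentiment head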

MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages

alexa/massive 18 Apr 2022

We present the MASSIVE dataset: a Multilingual Amazon SLU Resource Package (SLURP) for Slot-filling, Intent classification, and Virtual-assistant Evaluation.
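
A sketch of loading MASSIVE from the Hugging Face hub, assuming the `datasets` package, the "AmazonScience/massive" dataset ID with per-locale configurations such as "en-US", and field names ("utt", "intent", "annot_utt") taken from the published schema; a `datasets` release that still runs the dataset's loading script may be required.

    # Sketch: load one locale of MASSIVE and inspect an annotated utterance.
    from datasets import load_dataset

    massive = load_dataset("AmazonScience/massive", "en-US", split="train")

    example = massive[0]
    # Each example pairs an utterance with intent and slot annotations,
    # aligned across the 51 covered locales.
    print(example["utt"])
    print(example["intent"])
    print(example["annot_utt"])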

Emotion Classification in a Resource Constrained Language Using Transformer-based Approach

sagorbrur/bangla-bert NAACL 2021

A Bengali emotion corpus consisting of 6,243 texts is developed for the classification task.
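
A minimal classification-head sketch for this kind of resource-constrained setup, assuming the publicly released "sagorsarker/bangla-bert-base" checkpoint and a hypothetical six-class emotion label set; the corpus itself is not bundled with the model.

    # Sketch: Bengali text through a transformer with an (untrained) emotion head.
    # The six-label head is an assumption for illustration, not the paper's exact setup.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    model_id = "sagorsarker/bangla-bert-base"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=6)

    inputs = tokenizer("আমি আজ খুব খুশি", return_tensors="pt")  # "I am very happy today"
    print(model(**inputs).logits.shape)  # torch.Size([1, 6]) before any fine-tuning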

MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer

cambridgeltl/xcopa EMNLP 2020

The main goal behind state-of-the-art pre-trained multilingual models such as multilingual BERT and XLM-R is enabling and bootstrapping NLP applications in low-resource languages through zero-shot or few-shot cross-lingual transfer.
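
A sketch of the MAD-X recipe (a language adapter stacked under a task adapter), assuming the adapter-transformers fork; the adapter identifiers "qu/wiki@ukp" and "copa@ukp" are illustrative placeholders for a target-language adapter and an English-trained task adapter.

    # Sketch: zero-shot cross-lingual transfer in the MAD-X style.
    # Both AutoModelWithHeads and the composition module come from the
    # adapter-transformers fork, not vanilla transformers.
    from transformers import AutoModelWithHeads
    import transformers.adapters.composition as ac

    model = AutoModelWithHeads.from_pretrained("xlm-roberta-base")

    # Language adapter for the target language (MLM-trained on Wikipedia) and
    # a task adapter trained on English task data; the IDs are placeholders.
    lang = model.load_adapter("qu/wiki@ukp")
    task = model.load_adapter("copa@ukp")

    # Stack them: language adapter first, task adapter on top.
    model.active_adapters = ac.Stack(lang, task)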

BERTweet: A pre-trained language model for English Tweets

VinAIResearch/BERTweet EMNLP 2020

We present BERTweet, the first public large-scale pre-trained language model for English Tweets.
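
A sketch of extracting BERTweet features, assuming the released "vinai/bertweet-base" checkpoint; normalization=True applies the authors' tweet-specific preprocessing (user handles, URLs, emoji) and may require the nltk and emoji packages.

    # Sketch: encode a tweet with BERTweet and take the contextual embeddings.
    import torch
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)
    model = AutoModel.from_pretrained("vinai/bertweet-base")

    tweet = "SC reports its first presumptive coronavirus cases https://t.co/xyz"
    inputs = tokenizer(tweet, return_tensors="pt")
    with torch.no_grad():
        features = model(**inputs).last_hidden_state  # per-token embeddings
    print(features.shape)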

DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing

microsoft/DeBERTa 18 Nov 2021

We thus propose a new gradient-disentangled embedding sharing method that avoids the tug-of-war dynamics, improving both training efficiency and the quality of the pre-trained model.
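
A minimal sketch of using the released DeBERTaV3 checkpoint, assuming the "microsoft/deberta-v3-base" model ID on the Hugging Face hub; the three-way classification head is only an example and starts untrained.

    # Sketch: sentence-pair classification with DeBERTaV3 (NLI-style example head).
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    model_id = "microsoft/deberta-v3-base"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

    inputs = tokenizer("A man is playing a guitar.",
                       "A person plays an instrument.",
                       return_tensors="pt")
    print(model(**inputs).logits)  # untrained head, shown only to illustrate the API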

XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models

facebook/xlm-v-base 25 Jan 2023

Large multilingual language models typically rely on a single vocabulary shared across 100+ languages.
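
A short sketch contrasting the shared XLM-R vocabulary with the much larger XLM-V one, assuming the "facebook/xlm-v-base" checkpoint on the Hugging Face hub; the vocabulary sizes and the segmentation example are simply whatever the tokenizers report.

    # Sketch: compare vocabulary sizes and segmentation of the two tokenizers.
    from transformers import AutoTokenizer

    xlmr = AutoTokenizer.from_pretrained("xlm-roberta-base")
    xlmv = AutoTokenizer.from_pretrained("facebook/xlm-v-base")

    print(len(xlmr), len(xlmv))  # shared ~250k vocab vs. the much larger XLM-V vocab

    # The larger vocabulary is meant to reduce over-segmentation of
    # low-resource languages and scripts.
    text = "ትግርኛ ቋንቋ"
    print(xlmr.tokenize(text))
    print(xlmv.tokenize(text))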

XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation

microsoft/Unicoder 3 Apr 2020

In this paper, we introduce XGLUE, a new benchmark dataset that can be used to train large-scale cross-lingual pre-trained models using multilingual and bilingual corpora and evaluate their performance across a diverse set of cross-lingual tasks.
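
A sketch of loading one XGLUE task, assuming the `datasets` package and the hub's "xglue" dataset with per-task configurations such as "xnli"; a `datasets` release that still runs the benchmark's loading script may be required.

    # Sketch: load the XNLI task from XGLUE. Training data is English only;
    # validation/test splits are provided per target language for zero-shot
    # cross-lingual evaluation.
    from datasets import load_dataset

    xglue_xnli = load_dataset("xglue", "xnli")
    print(xglue_xnli)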

A Bayesian Multilingual Document Model for Zero-shot Topic Identification and Discovery

BUTSpeechFIT/BaySMM 2 Jul 2020

In this paper, we present a Bayesian multilingual document model for learning language-independent document embeddings.