84 papers with code • 0 benchmarks • 1 dataset





Most implemented papers

Unsupervised Cross-lingual Representation Learning at Scale

facebookresearch/XLM ACL 2020

We also present a detailed empirical analysis of the key factors required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high- and low-resource languages at scale.
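As a quick illustration (not part of the paper's own codebase, which lives in facebookresearch/XLM), the released XLM-R checkpoints can also be loaded through the Hugging Face transformers library. The snippet below is a minimal sketch assuming transformers is installed and the `xlm-roberta-base` checkpoint is available from the model hub:

```python
# Minimal sketch: tokenizing text with the released XLM-R checkpoint
# via the Hugging Face transformers library (an assumption of this
# example, not the original facebookresearch/XLM code path).
from transformers import AutoTokenizer

# XLM-R uses a single SentencePiece vocabulary shared across its 100
# training languages, so the same tokenizer handles any input language.
tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
ids = tok("Bonjour le monde")["input_ids"]
print(ids)  # token ids, wrapped in the <s> ... </s> special tokens
```

The same `from_pretrained` call with `AutoModel` loads the encoder weights for fine-tuning on downstream cross-lingual tasks.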

AdapterHub: A Framework for Adapting Transformers

Adapter-Hub/adapter-transformers EMNLP 2020

We propose AdapterHub, a framework that allows dynamic "stitching-in" of pre-trained adapters for different tasks and languages.

MASSIVE: A 1M-Example Multilingual Natural Language Understanding Dataset with 51 Typologically-Diverse Languages

alexa/massive 18 Apr 2022

We present the MASSIVE dataset: Multilingual Amazon SLU resource package (SLURP) for Slot-filling, Intent classification, and Virtual assistant Evaluation.

Emotion Classification in a Resource Constrained Language Using Transformer-based Approach

sagorbrur/bangla-bert NAACL 2021

A Bengali emotion corpus consisting of 6,243 texts is developed for the classification task.

MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer

cambridgeltl/xcopa EMNLP 2020

The main goal behind state-of-the-art pre-trained multilingual models such as multilingual BERT and XLM-R is to enable and bootstrap NLP applications in low-resource languages through zero-shot or few-shot cross-lingual transfer.

XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation

microsoft/Unicoder 3 Apr 2020

In this paper, we introduce XGLUE, a new benchmark dataset that can be used to train large-scale cross-lingual pre-trained models using multilingual and bilingual corpora and evaluate their performance across a diverse set of cross-lingual tasks.

BERTweet: A pre-trained language model for English Tweets

VinAIResearch/BERTweet EMNLP 2020

We present BERTweet, the first public large-scale pre-trained language model for English Tweets.

Applying Occam's Razor to Transformer-Based Dependency Parsing: What Works, What Doesn't, and What is Really Necessary

boschresearch/steps-parser 23 Oct 2020

We find that the choice of pre-trained embeddings has by far the greatest impact on parser performance and identify XLM-R as a robust choice across the languages in our study.

ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic

UBC-NLP/marbert 27 Dec 2020

To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation.

MiniLMv2: Multi-Head Self-Attention Relation Distillation for Compressing Pretrained Transformers

PaddlePaddle/PaddleNLP Findings (ACL) 2021

We generalize deep self-attention distillation in MiniLM (Wang et al., 2020) by only using self-attention relation distillation for task-agnostic compression of pretrained Transformers.