Language Models

mBART is a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the BART objective. The input texts are noised by masking phrases and permuting sentences, and a single Transformer model is trained to recover the original texts. Unlike other pre-training approaches for machine translation, mBART pre-trains a complete autoregressive Seq2Seq model. It is trained once for all languages, yielding a single set of parameters that can be fine-tuned for any language pair, in both supervised and unsupervised settings, without task-specific or language-specific modifications or initialization schemes.

Source: Multilingual Denoising Pre-training for Neural Machine Translation
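A minimal sketch of the noising described above, assuming whitespace tokenization and illustrative hyperparameters (the mask token, masking probability, and span lengths here are placeholders, not the paper's exact settings):

```python
import random

def bart_style_noise(sentences, mask_token="<mask>", mask_prob=0.2, max_span=3, seed=0):
    """Toy sketch of the two noising operations mBART borrows from BART:
    span ("phrase") masking inside each sentence, followed by permuting
    the sentence order of the document. The model is trained to
    reconstruct the original, un-noised document from this input.
    """
    rng = random.Random(seed)
    noised = []
    for sent in sentences:
        tokens = sent.split()
        out, i = [], 0
        while i < len(tokens):
            if rng.random() < mask_prob:
                # Replace a whole multi-token span with a single mask token
                # (text infilling), so the model must also infer span length.
                span = rng.randint(1, max_span)
                out.append(mask_token)
                i += span
            else:
                out.append(tokens[i])
                i += 1
        noised.append(" ".join(out))
    # Sentence permutation: shuffle the order of sentences in the document.
    rng.shuffle(noised)
    return noised

doc = [
    "mBART is pre-trained on monolingual corpora in many languages.",
    "A single Transformer is trained to recover the original text.",
]
print(bart_style_noise(doc))  # noised encoder input; `doc` is the decoder target
```

In practice, fine-tuning starts from the released multilingual checkpoint rather than re-running this pre-training; for instance, the facebook/mbart-large-cc25 weights distributed through Hugging Face Transformers can be fine-tuned on a single language pair.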

Tasks


Task                            Papers   Share
Translation                     36       21.43%
Machine Translation             27       16.07%
Sentence                        11       6.55%
Text Generation                 8        4.76%
Denoising                       8        4.76%
NMT                             7        4.17%
Abstractive Text Summarization  6        3.57%
Text Summarization              5        2.98%
Language Modelling              5        2.98%
