mBART is a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the BART objective. The input texts are noised by masking phrases and permuting sentences, and a single Transformer model is trained to recover the original texts. Unlike other pre-training approaches for machine translation, which pre-train only parts of the model (such as the encoder or the decoder), mBART pre-trains a complete autoregressive Seq2Seq model. It is trained once for all languages, providing a single set of parameters that can be fine-tuned for any language pair, in both supervised and unsupervised settings, without task-specific or language-specific modifications or initialization schemes.
Source: Multilingual Denoising Pre-training for Neural Machine Translation
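The noising step corrupts each training document before the model learns to reconstruct it. The original implementation is part of fairseq; the snippet below is only a simplified Python sketch of the two operations named above (sentence permutation and span masking), using the paper's reported settings of masking roughly 35% of the tokens with span lengths drawn from a Poisson distribution (λ = 3.5). The function name `add_noise` and the token-list input format are illustrative, not the authors' API.

```python
import numpy as np

MASK = "<mask>"

def add_noise(sentences, mask_ratio=0.35, poisson_lambda=3.5, seed=0):
    """Toy sketch of mBART's noising: permute sentences, then replace
    token spans with a single <mask> until ~mask_ratio of tokens are gone."""
    rng = np.random.default_rng(seed)

    # 1) Sentence permutation: shuffle the order of sentences in the document.
    order = rng.permutation(len(sentences))
    shuffled = [list(sentences[i]) for i in order]

    # 2) Text infilling: mask spans whose lengths follow Poisson(lambda=3.5),
    #    replacing each span with one <mask> token (as in BART/mBART).
    noised = []
    for tokens in shuffled:
        budget = int(len(tokens) * mask_ratio)
        while budget > 0 and len(tokens) > 1:
            span = int(min(budget, max(1, rng.poisson(poisson_lambda))))
            start = int(rng.integers(0, max(1, len(tokens) - span)))
            tokens[start:start + span] = [MASK]
            budget -= span
        noised.append(tokens)
    return noised

doc = [["The", "cat", "sat", "on", "the", "mat", "."],
       ["It", "was", "a", "sunny", "day", "."]]
print(add_noise(doc))  # corrupted input; the original doc is the target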
Tasks covered by papers that use mBART:

Task | Papers | Share
---|---|---
Translation | 38 | 19.29%
Machine Translation | 28 | 14.21%
Sentence | 13 | 6.60%
NMT | 8 | 4.06%
Text Generation | 8 | 4.06%
Decoder | 8 | 4.06%
Denoising | 8 | 4.06%
Abstractive Text Summarization | 7 | 3.55%
Text Summarization | 6 | 3.05%
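For the supervised setting, fine-tuning simply continues training the pre-trained model on parallel text for the chosen language pair. As a usage illustration only (the Hugging Face `transformers` library and the `facebook/mbart-large-en-ro` checkpoint are assumptions here, not part of the original fairseq release), translating with an mBART model fine-tuned on English–Romanian looks like this:

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

# Checkpoint fine-tuned from mBART for En->Ro translation (assumed available
# on the Hugging Face Hub); the source-language code is set on the tokenizer.
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro", src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")

text = "UN Chief Says There Is No Military Solution in Syria"
inputs = tokenizer(text, return_tensors="pt")

# mBART starts decoding from the target-language code token.
generated = model.generate(
    **inputs, decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"]
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```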