BART

BART is a denoising autoencoder for pretraining sequence-to-sequence models. It is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based seq2seq/NMT architecture with a bidirectional encoder (like BERT) and a left-to-right autoregressive decoder (like GPT). This means the encoder's attention mask is fully visible, as in BERT, while the decoder's attention mask is causal, as in GPT-2.

Source: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
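
A minimal sketch of this corrupt-then-reconstruct setup, using the Hugging Face `transformers` library (an assumed dependency; the page itself names no implementation). The `facebook/bart-large` checkpoint and the example sentence are illustrative choices, not from the source:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# (1) Corrupt the text with a noising function -- here, text infilling:
# a span of the original sentence is replaced by a single <mask> token.
corrupted = "UN Chief Says There Is No <mask> in Syria"
inputs = tokenizer(corrupted, return_tensors="pt")

# (2) Reconstruct the original text. The encoder reads the whole corrupted
# input with a fully visible attention mask (BERT-style); the decoder
# generates the reconstruction left to right under a causal mask (GPT-2-style).
with torch.no_grad():
    output_ids = model.generate(inputs["input_ids"], max_length=20, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because the encoder is bidirectional, context on both sides of the `<mask>` token informs the reconstruction, while the causal decoder guarantees each generated token depends only on earlier ones.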

Tasks


| Task                 | Papers | Share  |
|----------------------|-------:|-------:|
| RAG                  |    468 | 30.08% |
| Retrieval            |    342 | 21.98% |
| Question Answering   |    104 |  6.68% |
| Language Modelling   |     52 |  3.34% |
| Large Language Model |     44 |  2.83% |
| Language Modeling    |     42 |  2.70% |
| Information Retrieval|     39 |  2.51% |
| Text Generation      |     26 |  1.67% |
| Prompt Engineering   |     18 |  1.16% |
