PEGASUS proposes a Transformer-based model for abstractive summarization. It is pre-trained with a self-supervised objective called gap-sentences generation (GSG), designed to transfer well to summarization-style downstream tasks, combined during pre-training with BERT-style masked language modeling (MLM). As the paper describes its running example: "both GSG and MLM are applied simultaneously to this example as pre-training objectives. Originally there are three sentences. One sentence is masked with [MASK1] and used as target generation text (GSG). The other two sentences remain in the input, but some tokens are randomly masked by [MASK2]."
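To make the quoted GSG + MLM setup concrete, below is a minimal Python sketch of how such a pre-training example could be constructed from a short document. It is illustrative only: the paper selects gap sentences by a ROUGE-based importance score and operates on subword tokens, whereas this sketch picks the longest sentence and splits on whitespace; the `make_pretraining_example` helper and the toy document are hypothetical.

```python
import random
import re

MASK1 = "[MASK1]"  # sentence-level mask used by GSG (the masked sentence becomes the target)
MASK2 = "[MASK2]"  # token-level mask used by MLM

def make_pretraining_example(document, mlm_rate=0.15, seed=0):
    """Build an (input_text, target_text) pair in the spirit of PEGASUS GSG + MLM.

    Simplified sketch: one sentence is chosen as the gap sentence and replaced
    by [MASK1] in the input (it becomes the generation target), and a fraction
    of the tokens in the remaining sentences are replaced by [MASK2].
    The paper picks gap sentences by ROUGE "importance"; here the longest
    sentence stands in as a crude heuristic.
    """
    rng = random.Random(seed)
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]

    # GSG: choose the gap sentence (paper: highest ROUGE vs. the rest; here: longest).
    gap_idx = max(range(len(sentences)), key=lambda i: len(sentences[i]))
    target = sentences[gap_idx]

    # MLM: keep the other sentences in the input, masking a fraction of their tokens.
    masked = []
    for i, sent in enumerate(sentences):
        if i == gap_idx:
            masked.append(MASK1)
            continue
        tokens = [MASK2 if rng.random() < mlm_rate else t for t in sent.split()]
        masked.append(" ".join(tokens))

    return " ".join(masked), target

doc = ("Pegasus is mythical. Pegasus was a winged horse sired by Poseidon. "
       "He was foaled by the Gorgon Medusa.")
inp, tgt = make_pretraining_example(doc)
print(inp)   # input text containing [MASK1] and scattered [MASK2]
print(tgt)   # the gap sentence, used as the generation target
```

In the paper, MLM masks 15% of input tokens and the final PEGASUS models end up using GSG alone; the sketch only mirrors the mechanics of the figure the quote refers to.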
Source: PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization
Task | Papers | Share |
---|---|---|
Abstractive Text Summarization | 15 | 31.25% |
Text Summarization | 8 | 16.67% |
Text Generation | 3 | 6.25% |
Document Summarization | 3 | 6.25% |
Active Learning | 3 | 6.25% |
Multi-Document Summarization | 2 | 4.17% |
Decision Making | 1 | 2.08% |
Long-range modeling | 1 | 2.08% |
Table-to-Text Generation | 1 | 2.08% |
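Since abstractive summarization dominates the downstream usage listed above, a quick inference example may help. This assumes the Hugging Face transformers library and the public google/pegasus-xsum checkpoint, neither of which is part of this page; treat it as one common way to run a pre-trained PEGASUS model, not as the paper's original codebase.

```python
# Summarize a passage with a pre-trained PEGASUS checkpoint
# (assumes: pip install transformers sentencepiece torch).
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"  # assumed public checkpoint fine-tuned on XSum
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = (
    "PEGASUS pre-trains an encoder-decoder Transformer by removing whole "
    "sentences from a document and asking the model to generate them, "
    "which closely resembles the downstream summarization task."
)

# Encode the input, generate with beam search, and decode the summary.
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=4, max_length=64)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```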