Abstractive Text Summarization
225 papers with code • 14 benchmarks • 38 datasets
Abstractive Text Summarization is the task of generating a short, concise summary that captures the salient ideas of the source text. The generated summaries may contain new phrases and sentences that do not appear in the source text.
Source: Generative Adversarial Network for Abstractive Text Summarization
Image credit: Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
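As a quick illustration of the task, here is a minimal sketch of running an off-the-shelf abstractive summarizer. It assumes the Hugging Face transformers library; facebook/bart-large-cnn is used as one example checkpoint, but any summarization checkpoint would work:

```python
# Minimal sketch: abstractive summarization with a pretrained seq2seq model.
# Assumes the Hugging Face `transformers` library is installed.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and the tallest structure in Paris. Its base is square, "
    "measuring 125 metres on each side."
)

# max_length / min_length bound the length of the generated summary (in tokens);
# do_sample=False gives deterministic greedy/beam output.
result = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```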
Latest papers
Multi-LexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities
With the advent of large language models, methods for abstractive summarization have made great strides, creating potential for use in applications to aid knowledge workers processing unwieldy document collections.
Indian Legal Text Summarization: A Text Normalisation-based Approach
The authors experimented with two state-of-the-art domain-independent models for legal text summarization, namely BART and PEGASUS.
Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors
The propensity of abstractive summarization systems to make factual errors has been the subject of significant study, including work on models to detect factual errors and annotation of errors in current systems' outputs.
Lossless Acceleration for Seq2seq Generation with Aggressive Decoding
We study lossless acceleration for seq2seq generation with a novel decoding algorithm -- Aggressive Decoding.
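Roughly, the draft-and-verify idea behind such decoding schemes is: a cheap draft proposes a block of tokens, one parallel forward pass verifies them, and only the prefix the model itself would have produced greedily is kept, so the output matches plain greedy decoding token for token. The sketch below illustrates that verify loop, not the paper's implementation; both helpers (draft_fn, greedy_next_tokens) are hypothetical stand-ins:

```python
from typing import Callable, List

def aggressive_decode(
    draft_fn: Callable[[List[int]], List[int]],        # proposes a non-empty block of tokens
    greedy_next_tokens: Callable[[List[int]], List[int]],
    eos_id: int,
    max_len: int = 256,
) -> List[int]:
    """Hypothetical sketch. greedy_next_tokens(s) is assumed to return
    len(s) + 1 tokens from one forward pass, where element i is the token
    the model would greedily emit after the prefix s[:i]."""
    output: List[int] = []
    while len(output) < max_len and eos_id not in output:
        draft = draft_fn(output)                       # cheap block of guessed tokens
        verified = greedy_next_tokens(output + draft)  # one parallel verification pass
        # Accept the longest draft prefix the model agrees with.
        n_accept = 0
        for j, tok in enumerate(draft):
            if tok != verified[len(output) + j]:
                break
            n_accept += 1
        output.extend(draft[:n_accept])
        if n_accept < len(draft):
            # First mismatch: fall back to the model's own token, which keeps
            # the result identical to plain greedy decoding (hence "lossless").
            output.append(verified[len(output)])
    # Truncate at the first end-of-sequence token, if any.
    return output[: output.index(eos_id) + 1] if eos_id in output else output
```

The speedup comes from replacing many sequential decoding steps with one batched verification pass whenever the draft is mostly correct, which is common when the output closely mirrors the input.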
FactPEGASUS: Factuality-Aware Pre-training and Fine-tuning for Abstractive Summarization
We present FactPEGASUS, an abstractive summarization model that addresses the problem of factuality during pre-training and fine-tuning: (1) We augment the sentence selection strategy of PEGASUS's (Zhang et al., 2020) pre-training objective to create pseudo-summaries that are both important and factual; (2) We introduce three complementary components for fine-tuning.
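For context, PEGASUS-style pre-training masks out "important" sentences and trains the model to regenerate them as a pseudo-summary; FactPEGASUS additionally screens candidates for factuality. The sketch below is one illustrative instantiation of that idea, not the paper's exact strategy: rouge1_f1 is a toy unigram-overlap importance score, and factuality_score is a hypothetical stand-in for a learned metric such as FactCC:

```python
from collections import Counter
from typing import Callable, List

def rouge1_f1(candidate: str, reference: str) -> float:
    # Toy ROUGE-1 F1: unigram overlap between candidate and reference.
    c, r = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def select_pseudo_summary(
    sentences: List[str],
    factuality_score: Callable[[str, str], float],  # hypothetical FactCC-style scorer
    k: int = 3,
    min_factuality: float = 0.5,
) -> List[str]:
    scored = []
    for i, sent in enumerate(sentences):
        rest = " ".join(s for j, s in enumerate(sentences) if j != i)
        if factuality_score(sent, rest) < min_factuality:
            continue  # drop sentences the scorer flags as unfaithful to the document
        scored.append((rouge1_f1(sent, rest), i, sent))  # PEGASUS-style importance
    scored.sort(reverse=True)                    # most important first
    top_k = sorted(scored[:k], key=lambda t: t[1])  # restore document order
    return [sent for _, _, sent in top_k]
```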
ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation
In this work, we perform exhaustive experiments on both Vietnamese Abstractive Summarization and Named Entity Recognition, validating the performance of ViT5 against many other pretrained Transformer-based encoder-decoder models.
Falsesum: Generating Document-level NLI Examples for Recognizing Factual Inconsistency in Summarization
In this work, we show that NLI models can be effective for this task when the training data is augmented with high-quality task-oriented examples.
Masked Summarization to Generate Factually Inconsistent Summaries for Improved Factual Consistency Checking
To this end, the latest approach is to train a factual consistency classifier on factually consistent and inconsistent summaries.
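A minimal sketch of what one training step for such a classifier might look like, assuming the Hugging Face transformers library; the checkpoint choice and toy examples are illustrative, not the paper's setup (the paper's contribution concerns how the inconsistent training summaries are generated, via masking):

```python
# Sketch: binary factual-consistency classifier over (document, summary) pairs.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Label 1 = factually consistent, 0 = inconsistent (here, a swapped year).
documents = ["The company reported a profit of $3M in 2021."] * 2
summaries = ["The company earned $3M in 2021.",
             "The company earned $3M in 2019."]
labels = torch.tensor([1, 0])

# Encode document and summary as a sentence pair so the model can compare them.
batch = tokenizer(documents, summaries, padding=True, truncation=True,
                  return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss over the 2 classes
outputs.loss.backward()
optimizer.step()
```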
Efficient Few-Shot Fine-Tuning for Opinion Summarization
In the same vein, we pre-train the adapters in a query-based manner on customer reviews and then fine-tune them on annotated datasets.
Two New Datasets for Italian-Language Abstractive Text Summarization
Text summarization aims to produce a short summary containing the most relevant parts of a given text.