Abstractive Text Summarization
325 papers with code • 19 benchmarks • 48 datasets
Abstractive Text Summarization is the task of generating a short and concise summary that captures the salient ideas of the source text. The generated summaries may contain new phrases and sentences that do not appear in the source text.
Source: Generative Adversarial Network for Abstractive Text Summarization
Image credit: Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
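To make the task concrete, here is a minimal sketch of abstractive summarization with the Hugging Face transformers pipeline; the model choice and length limits below are illustrative assumptions, not a recommended configuration.

```python
# Minimal abstractive summarization sketch using the Hugging Face
# transformers pipeline. Model and length limits are illustrative choices.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

source = (
    "Abstractive summarization systems write summaries in their own words "
    "rather than copying sentences verbatim, so the output may contain "
    "phrases that never appear in the source text."
)

# max_length / min_length bound the generated summary in tokens.
result = summarizer(source, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```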
Latest papers
Mitigating Hallucination in Abstractive Summarization with Domain-Conditional Mutual Information
We hypothesize that the domain (or topic) of the source text triggers the model to generate text that is highly probable in the domain, neglecting the details of the source text.
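This places the method in the family of pointwise-mutual-information decoding: tokens that are probable under the domain alone are discounted, favoring tokens grounded in the source. A rough sketch of that rescoring idea (the linear weighting and the separate domain-conditioned pass are assumptions here, not the paper's exact formulation):

```python
import torch

def domain_conditional_scores(lp_source: torch.Tensor,
                              lp_domain: torch.Tensor,
                              lam: float = 0.5) -> torch.Tensor:
    """PMI-style rescoring: subtract (a fraction of) the domain-only
    log-probability so tokens must be supported by the source document.

    lp_source: log p(token | source document, prefix), shape (vocab,)
    lp_domain: log p(token | domain/topic prompt, prefix), shape (vocab,)
    lam: interpolation weight (assumed hyperparameter, not from the paper)
    """
    return lp_source - lam * lp_domain

# Toy usage with made-up distributions over a 5-token vocabulary.
lp_src = torch.log_softmax(torch.tensor([2.0, 1.0, 0.5, 0.1, -1.0]), dim=-1)
lp_dom = torch.log_softmax(torch.tensor([2.5, -1.0, 0.0, 0.0, -1.0]), dim=-1)
scores = domain_conditional_scores(lp_src, lp_dom)
print(scores.argmax().item())  # index 1: token 0 was mostly explained by the domain alone
```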
ACLSum: A New Dataset for Aspect-based Summarization of Scientific Publications
Extensive efforts in the past have been directed toward the development of summarization datasets.
Semi-Supervised Dialogue Abstractive Summarization via High-Quality Pseudolabel Selection
Semi-supervised dialogue summarization (SSDS) leverages model-generated summaries to reduce reliance on human-labeled data and improve the performance of summarization models.
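The generic recipe behind such pipelines is to generate candidate summaries, score them, and keep only the confident ones as extra training data. A sketch of the selection step (the length-normalized log-likelihood criterion and the threshold are assumptions, not the paper's specific selector):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    dialogue: str
    summary: str
    logprob: float   # sum of token log-probs from the generating model
    n_tokens: int

def select_pseudolabels(cands: list[Candidate],
                        threshold: float = -1.0) -> list[Candidate]:
    """Keep candidates whose length-normalized log-likelihood exceeds a
    threshold; surviving (dialogue, summary) pairs become training data.
    Criterion and threshold are illustrative assumptions."""
    return [c for c in cands if c.logprob / max(c.n_tokens, 1) > threshold]

pool = [
    Candidate("A: hi ... B: bye", "Short chat.", logprob=-2.0, n_tokens=3),
    Candidate("A: report? B: late", "They discuss a report.", logprob=-30.0, n_tokens=6),
]
print([c.summary for c in select_pseudolabels(pool)])  # ['Short chat.']
```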
Improving Factual Error Correction for Abstractive Summarization via Data Distillation and Conditional-generation Cloze
Improving factual consistency in abstractive summarization has been a focus of current research.
Source Identification in Abstractive Summarization
Neural abstractive summarization models generate summaries in an end-to-end manner, and little is known about how the source information is actually converted into summaries.
LOCOST: State-Space Models for Long Document Abstractive Summarization
State-space models are a low-complexity alternative to transformers for encoding long sequences and capturing long-term dependencies.
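The underlying primitive is a linear state-space recurrence, which processes a length-T sequence in O(T) time instead of the O(T^2) of self-attention. A minimal discrete-time sketch with randomly initialized matrices (LOCOST's actual parameterization is considerably more elaborate):

```python
import numpy as np

def ssm_scan(x: np.ndarray, A: np.ndarray, B: np.ndarray,
             C: np.ndarray) -> np.ndarray:
    """Run the discrete recurrence h_t = A h_{t-1} + B x_t, y_t = C h_t.
    x: (T, d_in); A: (n, n); B: (n, d_in); C: (d_out, n).
    Cost is linear in T, which is the appeal for long documents."""
    h = np.zeros(A.shape[0])
    ys = []
    for t in range(x.shape[0]):
        h = A @ h + B @ x[t]
        ys.append(C @ h)
    return np.stack(ys)

rng = np.random.default_rng(0)
T, d, n = 16, 4, 8
y = ssm_scan(rng.normal(size=(T, d)),
             0.9 * np.eye(n),                 # stable toy state transition
             rng.normal(size=(n, d)) * 0.1,
             rng.normal(size=(2, n)) * 0.1)
print(y.shape)  # (16, 2)
```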
MedTSS: transforming abstractive summarization of scientific articles with linguistic analysis and concept reinforcement
This research addresses the limitations of pretrained models (PTMs) in generating accurate and comprehensive abstractive summaries for scientific articles, with a specific focus on the challenges posed by medical research.
Revisiting Zero-Shot Abstractive Summarization in the Era of Large Language Models from the Perspective of Position Bias
We characterize and study zero-shot abstractive summarization in Large Language Models (LLMs) by measuring position bias, which we propose as a general formulation of the more restrictive lead bias phenomenon studied previously in the literature.
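One simple way to operationalize position bias is to align each summary sentence to its most lexically similar source sentence and inspect where the matches fall; a pile-up near the beginning indicates lead bias. A rough sketch (the unigram-overlap alignment is an assumption; the paper's metric may differ):

```python
def _tokens(s: str) -> set[str]:
    return set(s.lower().split())

def matched_positions(source_sents: list[str],
                      summary_sents: list[str]) -> list[float]:
    """For each summary sentence, return the relative position (0 = start,
    1 = end) of the source sentence with the highest unigram overlap."""
    n = len(source_sents)
    positions = []
    for sent in summary_sents:
        overlaps = [len(_tokens(sent) & _tokens(src)) for src in source_sents]
        best = max(range(n), key=lambda i: overlaps[i])
        positions.append(best / max(n - 1, 1))
    return positions

src = ["The mayor announced a new budget.",
       "Critics argued it favors downtown.",
       "A vote is scheduled for next month."]
summ = ["The mayor announced a budget.", "A vote happens next month."]
print(matched_positions(src, summ))  # [0.0, 1.0] -> no strong lead bias here
```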
ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks
With our design, FP6 can become a promising alternative to the 4-bit quantization methods currently used in LLMs.
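To make the format concrete, here is a toy "fake quantization" pass that snaps weights to the nearest value representable in a small sign/exponent/mantissa layout (the 1-3-2 bit split is an assumed FP6 variant; the paper's kernels and scaling pipeline are far more involved):

```python
import numpy as np

def fp6_grid(exp_bits: int = 3, man_bits: int = 2) -> np.ndarray:
    """All values representable in a toy 1-sign / 3-exponent / 2-mantissa
    float format (an assumed FP6 layout, including subnormals)."""
    bias = 2 ** (exp_bits - 1) - 1
    vals = []
    for e in range(2 ** exp_bits):
        for m in range(2 ** man_bits):
            frac = m / 2 ** man_bits
            v = frac * 2 ** (1 - bias) if e == 0 else (1 + frac) * 2 ** (e - bias)
            vals.extend((v, -v))
    return np.unique(np.array(vals))

def fake_quantize_fp6(w: np.ndarray) -> np.ndarray:
    """Scale weights into the FP6 range, snap each to its nearest grid
    point, and scale back. Per-tensor scaling is a simplification; real
    kernels use finer granularity."""
    grid = fp6_grid()
    scale = np.abs(w).max() / grid.max()
    idx = np.abs(w[..., None] / scale - grid).argmin(axis=-1)
    return grid[idx] * scale

w = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
wq = fake_quantize_fp6(w)
print(np.abs(w - wq).max())  # worst-case rounding error on this tensor
```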
FREDSum: A Dialogue Summarization Corpus for French Political Debates
In this paper, we present a dataset of French political debates to enhance resources for multilingual dialogue summarization.