Abstractive Text Summarization
353 papers with code • 18 benchmarks • 51 datasets
Abstractive Text Summarization is the task of generating a short, concise summary that captures the salient ideas of the source text. The generated summaries may contain new phrases and sentences that do not appear in the source text.
Source: Generative Adversarial Network for Abstractive Text Summarization
Image credit: Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
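As a concrete illustration of the task, a summary can be produced with an off-the-shelf sequence-to-sequence model. The sketch below uses the Hugging Face transformers summarization pipeline with facebook/bart-large-cnn; this is one common model choice among many, not the method of any particular paper listed here.

```python
# Minimal sketch of abstractive summarization with the Hugging Face
# `transformers` library. The model is one common choice, not the only one.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and was the tallest man-made structure in the world for "
    "41 years until the Chrysler Building in New York City was finished "
    "in 1930."
)

# max_length / min_length bound the summary length in tokens.
result = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```

Because the decoder generates tokens freely rather than extracting sentences, the output may paraphrase the source, which is what distinguishes abstractive from extractive summarization.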
Latest papers with no code
Beyond English: The Impact of Prompt Translation Strategies across Languages and Tasks in Multilingual LLMs
The optimal pre-translation strategy for various multilingual settings and tasks remains unclear.
CoCoA: A Generalized Approach to Uncertainty Quantification by Integrating Confidence and Consistency of LLM Outputs
Uncertainty quantification (UQ) methods for Large Language Models (LLMs) encompass a variety of approaches, with two major types being particularly prominent: information-based methods, which focus on model confidence expressed as token probabilities, and consistency-based methods, which assess the semantic relationship between multiple outputs generated using repeated sampling.
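To make the two families concrete, here is a hedged sketch of one scoring function from each, not the CoCoA method itself; semantic_sim stands in for any hypothetical similarity function, such as cosine similarity of sentence embeddings.

```python
# Illustrative sketch of the two UQ families described above,
# NOT the CoCoA method from the paper.
from itertools import combinations

def information_based_uq(token_logprobs):
    # Mean negative log-probability of the generated tokens:
    # a higher value means lower model confidence, i.e. higher uncertainty.
    return -sum(token_logprobs) / len(token_logprobs)

def consistency_based_uq(samples, semantic_sim):
    # Disagreement among repeatedly sampled outputs:
    # 1 minus the mean pairwise semantic similarity.
    pairs = list(combinations(samples, 2))
    mean_sim = sum(semantic_sim(a, b) for a, b in pairs) / len(pairs)
    return 1.0 - mean_sim

# Toy usage; a real semantic_sim would compare sentence embeddings.
print(information_based_uq([-0.1, -0.3, -0.05]))
print(consistency_based_uq(["a", "a", "b"],
                           lambda x, y: 1.0 if x == y else 0.0))
```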
Can summarization approximate simplification? A gold standard comparison
This study explores the overlap between text summarization and simplification outputs.
Abstractive Text Summarization for Bangla Language Using NLP and Machine Learning Approaches
Text summarization involves reducing extensive documents to short sentences that encapsulate the essential ideas.
Abstractive Text Summarization for Contemporary Sanskrit Prose: Issues and Challenges
The key research question that this thesis investigates is: what are the challenges in developing an abstractive TS system for Sanskrit?
Survey of Pseudonymization, Abstractive Summarization & Spell Checker for Hindi and Marathi
India's vast linguistic diversity presents unique challenges and opportunities for technological advancement, especially in the realm of Natural Language Processing (NLP).
Length Controlled Generation for Black-box LLMs
Large language models (LLMs) have demonstrated impressive instruction-following capabilities, yet they still struggle to accurately control the length of the generated text, which is a fundamental requirement in many real-world applications.
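For intuition, a simple baseline for length control over a black-box model is a rejection-and-reprompt loop; the sketch below is a generic illustration of that idea, not the method proposed in the paper above, and call_llm is a hypothetical black-box completion function.

```python
# Generic rejection-and-reprompt loop for coarse length control of a
# black-box LLM. Illustrative baseline only, NOT the paper's method.
def generate_with_length(call_llm, prompt, target_words, tol=0.2, max_tries=5):
    lo = int(target_words * (1 - tol))
    hi = int(target_words * (1 + tol))
    instruction = f"{prompt}\n\nAnswer in about {target_words} words."
    for _ in range(max_tries):
        text = call_llm(instruction)
        n = len(text.split())
        if lo <= n <= hi:
            return text
        # Feed the observed length back as a corrective instruction.
        direction = "shorter" if n > hi else "longer"
        instruction = (f"{prompt}\n\nYour previous answer had {n} words; "
                       f"write a {direction} answer of about "
                       f"{target_words} words.")
    return text  # best effort after max_tries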
How Private are Language Models in Abstractive Summarization?
Language models (LMs) have shown outstanding performance in text summarization, including in sensitive domains such as medicine and law.
DocSum: Domain-Adaptive Pre-training for Document Abstractive Summarization
Abstractive summarization has made significant strides in condensing and rephrasing large volumes of text into coherent summaries.
Guide-to-Explain for Controllable Summarization
Recently, large language models (LLMs) have demonstrated remarkable performance in abstractive summarization tasks.