Text Summarization
368 papers with code • 33 benchmarks • 87 datasets
Text Summarization is a natural language processing (NLP) task that condenses a lengthy text document into a shorter version while retaining the most important information and meaning. The goal is to produce a summary that accurately represents the content of the original text in a concise form.
There are different approaches to text summarization, including extractive methods that identify and extract important sentences or phrases from the text, and abstractive methods that generate new text based on the content of the original text.
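The extractive approach can be illustrated with a classic frequency-based heuristic: score each sentence by the average corpus frequency of its words, then keep the top-scoring sentences in their original order. The sketch below is illustrative only (the function name and scoring rule are assumptions, not a reference implementation); modern extractive systems typically use learned sentence encoders instead.

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Score sentences by average word frequency and keep the top ones,
    preserving their original document order (a simple extractive baseline)."""
    # Naive sentence split on end punctuation; real systems use a tokenizer.
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    # Word frequencies over the whole document act as importance weights.
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sent):
        tokens = re.findall(r'\w+', sent.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    # Rank sentences by score, then restore document order for readability.
    top = sorted(sorted(sentences, key=score, reverse=True)[:num_sentences],
                 key=sentences.index)
    return ' '.join(top)
```

An abstractive system, by contrast, would generate new sentences rather than select existing ones, which is why it is usually built on a sequence-to-sequence or large language model rather than a scoring heuristic like this.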
Latest papers
Accelerating Inference in Large Language Models with a Unified Layer Skipping Strategy
Recently, dynamic computation methods have shown notable acceleration for Large Language Models (LLMs) by skipping several layers of computations through elaborate heuristics or additional predictors.
On the Role of Summary Content Units in Text Summarization Evaluation
At the heart of the Pyramid evaluation method for text summarization lie human written summary content units (SCUs).
On the Benefits of Fine-Grained Loss Truncation: A Case Study on Factuality in Summarization
We study how the underlying losses differ between factual and non-factual examples to understand and refine the performance of Loss Truncation (LT). We demonstrate that LT's performance is limited when its underlying assumption, that noisy targets have higher NLL loss, is not satisfied, and find that word-level NLL among entities provides a better signal for distinguishing factuality.
German also Hallucinates! Inconsistency Detection in News Summaries with the Absinth Dataset
The advent of Large Language Models (LLMs) has led to remarkable progress on a wide range of natural language processing tasks.
Attribute Structuring Improves LLM-Based Evaluation of Clinical Text Summaries
Summarizing clinical text is crucial in health decision-support and clinical research.
TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue Summarization
We find that there are diverse errors and error distributions in model-generated summaries, and that non-LLM-based metrics can capture all error types better than LLM-based evaluators.
BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation
Large language models (LLMs) have demonstrated outstanding performance in various tasks, such as text summarization and question answering.
TL;DR Progress: Multi-faceted Literature Exploration in Text Summarization
This paper presents TL;DR Progress, a new tool for exploring the literature on neural text summarization.
A Survey of Large Language Models in Finance (FinLLMs)
This survey provides a comprehensive overview of FinLLMs, including their history, techniques, performance, and opportunities and challenges.
The Radiation Oncology NLP Database
ROND is specifically designed to address this gap in the domain of radiation oncology, a field that offers many opportunities for NLP exploration.