News Summarization
31 papers with code • 1 benchmark • 4 datasets
Benchmarks
These leaderboards are used to track progress in News Summarization.
Most implemented papers
Meeting Summarization with Pre-training and Clustering Methods
Lastly, we compare the performance of our baseline models with BART, a state-of-the-art language model that is effective for summarization.
Read Top News First: A Document Reordering Approach for Multi-Document News Summarization
A common method for extractive multi-document news summarization is to re-formulate it as a single-document summarization problem by concatenating all documents as a single meta-document.
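That re-formulation can be sketched in a few lines. The relevance-scoring heuristic and separator below are illustrative assumptions, not the paper's actual reordering model:

```python
def order_by_relevance(docs, query_terms):
    """Score each document by query-term overlap and sort best-first,
    so the most newsworthy document leads the meta-document."""
    def score(doc):
        words = set(doc.lower().split())
        return sum(1 for t in query_terms if t.lower() in words)
    return sorted(docs, key=score, reverse=True)

def build_meta_document(docs, query_terms, sep="\n\n"):
    """Concatenate the reordered documents into a single meta-document
    that any single-document summarizer can consume."""
    return sep.join(order_by_relevance(docs, query_terms))
```

Because most summarizers truncate long inputs, the ordering matters: documents placed first are the ones the model is guaranteed to see, which is the intuition behind reading "top news first."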
NeuS: Neutral Multi-News Summarization for Mitigating Framing Bias
Based on our discovery that the title provides a good signal for framing bias, we present NeuS-TITLE, which learns to neutralize news content hierarchically, from title to article.
Podcast Summary Assessment: A Resource for Evaluating Summary Assessment Methods
The podcast summary assessment data is publicly available.
News Summarization and Evaluation in the Era of GPT-3
Finally, we evaluate models on a setting beyond generic summarization, specifically keyword-based summarization, and show how dominant fine-tuning approaches compare to prompting.
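As a point of reference for what keyword-based summarization asks of a model, here is a toy extractive baseline (the scoring scheme and sentence splitting are illustrative assumptions, not the paper's method): rank sentences by keyword overlap and return the top ones in document order.

```python
def keyword_summary(article, keywords, max_sentences=2):
    """Toy keyword-guided extractive summarizer: keep the sentences
    with the most keyword hits, preserving their original order."""
    sentences = [s.strip() for s in article.split(".") if s.strip()]
    kw = {k.lower() for k in keywords}
    scored = [(sum(w.lower().strip(",;") in kw for w in s.split()), i, s)
              for i, s in enumerate(sentences)]
    # Highest score first; ties broken by position in the article.
    top = sorted(scored, key=lambda t: (-t[0], t[1]))[:max_sentences]
    return ". ".join(s for _, _, s in sorted(top, key=lambda t: t[1])) + "."
```

Prompted LLMs handle the same task by conditioning generation on the keywords, which is what makes the fine-tuning-versus-prompting comparison interesting.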
He Said, She Said: Style Transfer for Shifting the Perspective of Dialogues
As a sample application, we demonstrate that applying perspective shifting to a dialogue summarization dataset (SAMSum) substantially improves the zero-shot performance of extractive news summarization models on this data.
Evaluating the Factual Consistency of Large Language Models Through News Summarization
To obtain factually inconsistent summaries, we generate summaries from a suite of summarization models and manually annotate them as factually inconsistent.
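Once summaries carry human consistent/inconsistent labels, a factuality metric can be benchmarked by thresholding its scores and measuring agreement with the annotations. A minimal sketch of that evaluation loop (the threshold and names are illustrative, not the paper's protocol):

```python
def metric_accuracy(scores, labels, threshold=0.5):
    """Fraction of summaries where thresholding the factuality score
    agrees with the human consistent (True) / inconsistent (False) label."""
    preds = [s >= threshold for s in scores]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)
```

In practice one would also sweep the threshold or report a ranking-based statistic, since different metrics place their scores on different scales.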
SumREN: Summarizing Reported Speech about Events in News
A primary objective of news articles is to establish the factual record for an event, frequently achieved by conveying both the details of the specified event (i.e., the 5 Ws: Who, What, Where, When, and Why regarding the event) and how people reacted to it (i.e., reported statements).
Benchmarking Large Language Models for News Summarization
Large language models (LLMs) have shown promise for automatic summarization, but the reasons behind their successes are poorly understood.
FactKB: Generalizable Factuality Evaluation using Language Models Enhanced with Factual Knowledge
We propose FactKB, a simple new approach to factuality evaluation that is generalizable across domains, in particular with respect to entities and relations.