News Summarization

13 papers with code • 1 benchmark • 1 dataset


Most implemented papers

Sentence Centrality Revisited for Unsupervised Summarization

mswellhao/PacSum ACL 2019

Single-document summarization has enjoyed renewed interest in recent years thanks to the popularity of neural network models and the availability of large-scale datasets.
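The core idea behind centrality-based unsupervised summarization can be illustrated with a minimal sketch. This is not the PacSum implementation (which uses BERT-based, directional similarities); it is a toy version using bag-of-words cosine similarity, where each sentence is scored by its total similarity to the rest of the document and the top-scoring sentences form the summary.

```python
# Toy centrality-based extractive summarizer (illustrative only,
# not the PacSum method): a sentence is "central" if it is similar
# to many other sentences in the document.
from collections import Counter
import math

def similarity(a, b):
    """Cosine similarity over bag-of-words counts (an illustrative choice)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def centrality_summary(sentences, k=1):
    """Rank sentences by total similarity to all other sentences,
    then return the top-k in original document order."""
    scores = [
        sum(similarity(s, t) for j, t in enumerate(sentences) if i != j)
        for i, s in enumerate(sentences)
    ]
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    return [sentences[i] for i in sorted(ranked[:k])]
```

PacSum's contribution is precisely in replacing this symmetric, surface-level similarity with directional, contextualized ones, but the extraction loop has the same shape.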

Earlier Isn't Always Better: Sub-aspect Analysis on Corpus and System Biases in Summarization

dykang/biassum IJCNLP 2019

We find that while position exhibits substantial bias in news articles, this is not the case, for example, with academic papers and meeting minutes.

Exploring Content Selection in Summarization of Novel Chapters

manestay/novel-chapter-dataset ACL 2020

We present a new summarization task, generating summaries of novel chapters using summary/chapter pairs from online study guides.

Bengali Abstractive News Summarization (BANS): A Neural Attention Approach

Prithwiraj12/Bengali-Deep-News-Summarization 3 Dec 2020

Our proposed system deploys a local attention-based model that generates long sequences of lucid, human-like sentences capturing the noteworthy information of the original document.

Generating abstractive summaries of Lithuanian news articles using a transformer model

LukasStankevicius/Generating-abstractive-summaries-of-Lithuanian-news-articles-using-a-transformer-model 23 Apr 2021

In this work, we train the first monolingual Lithuanian transformer model on a relatively large corpus of Lithuanian news articles and compare various output decoding algorithms for abstractive news summarization.
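The "output decoding algorithms" being compared can be illustrated with a minimal sketch. The toy next-token distribution below is a hypothetical stand-in for a real transformer's predictions; it only serves to contrast greedy decoding (always take the most probable token) with sampling (draw from the distribution).

```python
# Contrast of two decoding strategies over a toy, fixed next-token
# distribution (a stand-in for a real model's softmax output).
import random

def next_token_probs(prefix):
    """Hypothetical model output: a fixed distribution for illustration."""
    return {"news": 0.5, "summary": 0.3, "<eos>": 0.2}

def greedy_decode(max_len=5):
    """Deterministic: always pick the single most probable token."""
    out = []
    for _ in range(max_len):
        probs = next_token_probs(out)
        token = max(probs, key=probs.get)
        if token == "<eos>":
            break
        out.append(token)
    return out

def sample_decode(max_len=5, rng=None):
    """Stochastic: draw each token from the distribution, so repeated
    runs can produce different outputs."""
    rng = rng or random.Random()
    out = []
    for _ in range(max_len):
        probs = next_token_probs(out)
        token = rng.choices(list(probs), weights=list(probs.values()))[0]
        if token == "<eos>":
            break
        out.append(token)
    return out
```

Beam search and nucleus (top-p) sampling, also commonly compared in this setting, are variations on the same loop: the former keeps several candidate prefixes, the latter truncates the distribution before sampling.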

The Summary Loop: Learning to Write Abstractive Summaries Without Examples

cannylab/summary_loop ACL 2020

This work presents a new approach to unsupervised abstractive summarization based on maximizing a combination of coverage and fluency for a given length constraint.
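The shape of such an objective can be sketched with toy scoring functions. These are not the paper's learned coverage and fluency models; they are simple stand-ins (word overlap for coverage, a repetition penalty for fluency) that show how the two signals combine under a length constraint.

```python
# Toy version of a coverage-x-fluency summary reward under a length
# budget (the real Summary Loop uses learned neural scorers).
def coverage(summary, document):
    """Fraction of the document's words that survive into the summary
    (a crude proxy for the paper's masked-word coverage model)."""
    doc_words = set(document.lower().split())
    sum_words = set(summary.lower().split())
    return len(doc_words & sum_words) / len(doc_words) if doc_words else 0.0

def fluency(summary):
    """Stand-in for a language-model fluency score: here, just a
    penalty for repeated words."""
    words = summary.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def summary_score(summary, document, max_words=10):
    """Combined reward; summaries over the length budget get zero."""
    if len(summary.split()) > max_words:
        return 0.0
    return coverage(summary, document) * fluency(summary)
```

The multiplicative combination means a summary must do well on both axes: high overlap with a degenerate, repetitive output still scores poorly.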

MiRANews: Dataset and Benchmarks for Multi-Resource-Assisted News Summarization

xinnuoxu/miranews Findings (EMNLP) 2021

We show via data analysis that it is not only the models that are to blame: more than 27% of facts mentioned in the gold summaries of MiRANews are better grounded in the assisting documents than in the main source articles.

Meeting Summarization with Pre-training and Clustering Methods

wxj77/meetingsummarization 16 Nov 2021

Lastly, we compare the performance of our baseline models with BART, a state-of-the-art language model that is effective for summarization.

IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation

gsarti/it5 7 Mar 2022

The T5 model and its unified text-to-text paradigm contributed to advancing the state-of-the-art for many natural language processing tasks.