Abstractive Text Summarization

225 papers with code • 14 benchmarks • 38 datasets

Abstractive Text Summarization is the task of generating a concise summary that captures the salient ideas of a source text. Unlike extractive summarization, the generated summary may contain novel phrases and sentences that do not appear in the source.
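The extractive/abstractive distinction can be quantified with a novel-token rate: the fraction of summary tokens never seen in the source. A minimal sketch in plain Python (the function name and example sentences are illustrative, not taken from any listed paper):

```python
def novel_token_rate(source: str, summary: str) -> float:
    """Fraction of summary tokens that never appear in the source.

    A rough proxy for how "abstractive" a summary is: extractive
    summaries score near 0, abstractive ones score higher.
    """
    source_tokens = set(source.lower().split())
    summary_tokens = summary.lower().split()
    if not summary_tokens:
        return 0.0
    novel = [t for t in summary_tokens if t not in source_tokens]
    return len(novel) / len(summary_tokens)

source = "the committee approved the budget after a long debate"
extractive = "the committee approved the budget"
abstractive = "lawmakers passed the spending plan"

novel_token_rate(source, extractive)   # -> 0.0 (pure copy)
novel_token_rate(source, abstractive)  # -> 0.8 (only "the" is copied)
```

Real evaluations use stemming and n-gram overlap rather than exact whole-word matching, but the trend is the same.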

Source: Generative Adversarial Network for Abstractive Text Summarization

Image credit: Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond


Multi-LexSum: Real-World Summaries of Civil Rights Lawsuits at Multiple Granularities

multilexsum/dataset 22 Jun 2022

With the advent of large language models, methods for abstractive summarization have made great strides, creating potential for use in applications to aid knowledge workers processing unwieldy document collections.


Indian Legal Text Summarization: A Text Normalisation-based Approach

satyajit1910/ilds 13 Jun 2022

The authors experimented with two state-of-the-art domain-independent models for legal text summarization, namely BART and PEGASUS.


Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors

liyan06/aggrefact 25 May 2022

The propensity of abstractive summarization systems to make factual errors has been the subject of significant study, including work on models to detect factual errors and annotation of errors in current systems' outputs.


Lossless Acceleration for Seq2seq Generation with Aggressive Decoding

microsoft/unilm 20 May 2022

We study lossless acceleration for seq2seq generation with a novel decoding algorithm -- Aggressive Decoding.
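The core of draft-and-verify schemes like Aggressive Decoding is the acceptance rule: keep the longest draft prefix that agrees with greedy decoding, then fall back to ordinary generation, so the output is bitwise identical to plain greedy decoding. A toy sketch, assuming a deterministic `greedy_step(prefix)` oracle standing in for the model (the real algorithm verifies the whole draft in a single forward pass; the loop below only shows the acceptance logic):

```python
def aggressive_decode(greedy_step, draft, max_len=50):
    """One round of verify-then-extend decoding.

    greedy_step(prefix) -> the token the model would emit greedily next.
    draft               -> a cheap guess at the output (e.g. the input
                           itself, for near-copy tasks).
    The result is identical to plain greedy decoding, only (potentially)
    cheaper, because correct draft tokens are accepted in bulk.
    """
    output = []
    # Verification: accept draft tokens while they match greedy decoding.
    for token in draft:
        if greedy_step(output) == token:
            output.append(token)
        else:
            break  # first mismatch: discard the rest of the draft
    # Fallback: ordinary token-by-token greedy decoding for the rest.
    while len(output) < max_len:
        token = greedy_step(output)
        if token == "<eos>":
            break
        output.append(token)
    return output

# Toy "model" whose greedy output is a fixed sentence.
target = "the cat sat on the mat".split()
greedy_step = lambda p: target[len(p)] if len(p) < len(target) else "<eos>"

draft = "the cat sat in the mat".split()  # one wrong token
aggressive_decode(greedy_step, draft)     # -> same tokens as `target`
```

With a mostly-correct draft, only the mismatching suffix pays the per-token cost; with an empty draft the function degrades gracefully to plain greedy decoding.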


FactPEGASUS: Factuality-Aware Pre-training and Fine-tuning for Abstractive Summarization

meetdavidwan/factpegasus 16 May 2022

We present FactPEGASUS, an abstractive summarization model that addresses the problem of factuality during pre-training and fine-tuning: (1) We augment the sentence selection strategy of PEGASUS's (Zhang et al., 2020) pre-training objective to create pseudo-summaries that are both important and factual; (2) We introduce three complementary components for fine-tuning.
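The gap-sentence idea behind (1) can be sketched in a few lines: score each sentence by its overlap with the rest of the document (importance), weight by a factuality score, and keep the top k as the pseudo-summary. Here `rouge1_f1` is a simplified stand-in for real ROUGE, and `factuality_score` is a placeholder for the learned consistency model FactPEGASUS actually uses:

```python
def rouge1_f1(candidate_tokens, reference_tokens):
    """Unigram-overlap F1 (a much-simplified ROUGE-1)."""
    cand, ref = set(candidate_tokens), set(reference_tokens)
    overlap = len(cand & ref)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(cand), overlap / len(ref)
    return 2 * p * r / (p + r)

def select_pseudo_summary(sentences, factuality_score, k=1):
    """Pick the k sentences that are both important and factual.

    Importance follows PEGASUS's gap-sentence selection: score each
    sentence by overlap with the rest of the document. Multiplying in
    `factuality_score` mirrors FactPEGASUS's augmented selection; any
    combination rule would slot in the same way.
    """
    scored = []
    for i, sent in enumerate(sentences):
        rest = [t for j, s in enumerate(sentences) if j != i
                for t in s.split()]
        importance = rouge1_f1(sent.split(), rest)
        scored.append((importance * factuality_score(sent), i, sent))
    scored.sort(reverse=True)
    return [sent for _, _, sent in scored[:k]]

doc = ["the model improves summary quality",
       "the model is fast",
       "quality matters in news"]
select_pseudo_summary(doc, lambda s: 1.0)  # highest-overlap sentence wins
```

Dropping the factuality term (a constant score of 1.0) recovers plain gap-sentence selection; a score that penalizes a sentence re-ranks the selection accordingly.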


ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation

vietai/vit5 13 May 2022

In this work, we perform exhaustive experiments on both Vietnamese Abstractive Summarization and Named Entity Recognition, validating the performance of ViT5 against many other pretrained Transformer-based encoder-decoder models.


Falsesum: Generating Document-level NLI Examples for Recognizing Factual Inconsistency in Summarization

joshbambrick/falsesum 12 May 2022

In this work, we show that NLI models can be effective for this task when the training data is augmented with high-quality task-oriented examples.


Masked Summarization to Generate Factually Inconsistent Summaries for Improved Factual Consistency Checking

hwanheelee1993/mfma 4 May 2022

To this end, the latest approach is to train a factual consistency classifier on factually consistent and inconsistent summaries.
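Inconsistent training summaries can, in the simplest case, come from rule-based perturbations of reference summaries; the sketch below uses a number swap as a cheap stand-in for the paper's masked, model-generated negatives (`corrupt_number` is illustrative, not from the paper):

```python
import re

def corrupt_number(summary: str, offset: int = 1) -> str:
    """Return a copy of `summary` with its first number changed.

    A rule-based stand-in for model-based generation of inconsistent
    summaries: the perturbed copy makes a claim the source no longer
    supports, so it can serve as a negative example when training a
    factual consistency classifier.
    """
    return re.sub(r"\d+",
                  lambda m: str(int(m.group()) + offset),
                  summary, count=1)

corrupt_number("the company hired 300 workers in 2021")
# -> "the company hired 301 workers in 2021"
```

Such heuristic negatives are easy to detect; the appeal of mask-and-fill generation is that it produces harder, more model-like errors.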


Efficient Few-Shot Fine-Tuning for Opinion Summarization

amazon-research/adasum 4 May 2022

In the same vein, we pre-train the adapters in a query-based manner on customer reviews and then fine-tune them on annotated datasets.


Two New Datasets for Italian-Language Abstractive Text Summarization

nicolalandro/summarization Information 2022

Text summarization aims to produce a short summary that captures the relevant content of a given text.

29 Apr 2022