About

Text summarization is the task of computationally shortening a set of data to create a summary that represents the most important or relevant information within the original content (Source: Wikipedia).

Benchmarks


Subtasks

Datasets

Greatest papers with code

Attention Is All You Need

NeurIPS 2017 tensorflow/models

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.

ABSTRACTIVE TEXT SUMMARIZATION · CONSTITUENCY PARSING · MACHINE TRANSLATION
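The Transformer proposed in this paper replaces recurrence and convolution with attention alone. Below is a minimal NumPy sketch of the scaled dot-product attention it builds on; names and shapes are illustrative and not taken from the tensorflow/models implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q: (seq_q, d_k), K: (seq_k, d_k), V: (seq_k, d_v) -- single head, no mask.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # (seq_q, seq_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # row-wise softmax
    return weights @ V                                      # (seq_q, d_v) weighted values

# Toy example: 3 query positions attending over 4 key/value positions.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 16))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 16)
```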

A Neural Attention Model for Abstractive Sentence Summarization

EMNLP 2015 tensorflow/models

Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build.

EXTRACTIVE TEXT SUMMARIZATION · SENTENCE SUMMARIZATION

BARThez: a Skilled Pretrained French Sequence-to-Sequence Model

23 Oct 2020 huggingface/transformers

We show BARThez to be very competitive with state-of-the-art BERT-based French language models such as CamemBERT and FlauBERT.

Ranked #1 on Text Summarization on OrangeSum (using extra training data)

NATURAL LANGUAGE UNDERSTANDING · SELF-SUPERVISED LEARNING · TEXT SUMMARIZATION · TRANSFER LEARNING
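BARThez is distributed as a checkpoint for the huggingface/transformers library listed above. A hedged usage sketch for French abstractive summarization follows; the checkpoint name moussaKam/barthez-orangesum-abstract and the generation settings are assumptions for illustration, not taken from this page.

```python
# Sketch only: assumes the transformers library and an (assumed) Hub checkpoint
# "moussaKam/barthez-orangesum-abstract" fine-tuned for OrangeSum summarization.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "moussaKam/barthez-orangesum-abstract"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

article = "Citroën a révélé un nouveau concept-car électrique destiné au marché européen."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```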

ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training

13 Jan 2020 huggingface/transformers

This paper presents a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction together with the proposed n-stream self-attention mechanism.

Ranked #3 on Abstractive Text Summarization on CNN / Daily Mail (using extra training data)

ABSTRACTIVE TEXT SUMMARIZATION · QUESTION GENERATION
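ProphetNet's future n-gram prediction trains the decoder to predict the next n tokens at every position rather than only the next token. A minimal sketch of how such targets could be laid out (illustrative only; the actual objective is optimized through the n-stream self-attention described in the paper):

```python
def future_ngram_targets(tokens, n=2):
    """For each position t, return the next n tokens (the 'future n-gram')
    that a ProphetNet-style decoder would be asked to predict.
    Positions near the end of the sequence are padded with None."""
    targets = []
    for t in range(len(tokens)):
        future = tokens[t + 1 : t + 1 + n]
        future += [None] * (n - len(future))   # pad short futures at the sequence end
        targets.append(future)
    return targets

tokens = ["the", "cat", "sat", "on", "the", "mat"]
for tok, fut in zip(tokens, future_ngram_targets(tokens, n=2)):
    print(f"{tok:>4} -> {fut}")
# e.g. "the" -> ["cat", "sat"], "cat" -> ["sat", "on"], ...
```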

PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization

ICML 2020 huggingface/transformers

Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization.

ABSTRACTIVE TEXT SUMMARIZATION
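PEGASUS pre-trains by removing important "gap sentences" from a document and generating them as a pseudo-summary. A simplified sketch of that selection step, using plain word overlap as a stand-in for the ROUGE-based importance scoring used in the paper:

```python
def select_gap_sentences(sentences, ratio=0.3):
    """Score each sentence by word overlap with the rest of the document
    (a crude proxy for the ROUGE-based scoring in PEGASUS) and pick the
    top `ratio` fraction as gap sentences to mask and regenerate."""
    n_select = max(1, int(len(sentences) * ratio))
    scores = []
    for i, sent in enumerate(sentences):
        rest = " ".join(s for j, s in enumerate(sentences) if j != i).split()
        overlap = len(set(sent.lower().split()) & {w.lower() for w in rest})
        scores.append((overlap, i))
    selected = {i for _, i in sorted(scores, reverse=True)[:n_select]}
    masked = [("[MASK1]" if i in selected else s) for i, s in enumerate(sentences)]
    target = " ".join(sentences[i] for i in sorted(selected))
    return " ".join(masked), target  # (pre-training input, pseudo-summary target)

doc = ["Pegasus masks whole sentences.", "The model must regenerate them.",
       "Unrelated filler sentence here."]
print(select_gap_sentences(doc, ratio=0.34))
```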

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

ACL 2020 huggingface/transformers

We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.

ABSTRACTIVE TEXT SUMMARIZATION · DENOISING · MACHINE TRANSLATION · NATURAL LANGUAGE INFERENCE · QUESTION ANSWERING · TEXT GENERATION
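The excerpt above names BART's two best-performing noising transforms: shuffling the original sentence order and text infilling, where a span of text is replaced by a single mask token. A hedged sketch of both transforms; span sampling is simplified to a fixed length, whereas the paper samples span lengths from a Poisson distribution and masks multiple spans.

```python
import random

def shuffle_sentences(sentences, rng=random):
    """BART-style sentence permutation: shuffle the document's sentences."""
    shuffled = sentences[:]
    rng.shuffle(shuffled)
    return shuffled

def infill_span(tokens, span_len=3, mask_token="<mask>", rng=random):
    """BART-style text infilling, simplified: replace one contiguous span
    of tokens with a single mask token."""
    if len(tokens) <= span_len:
        return [mask_token]
    start = rng.randrange(len(tokens) - span_len + 1)
    return tokens[:start] + [mask_token] + tokens[start + span_len:]

rng = random.Random(0)
print(shuffle_sentences(["Sentence one.", "Sentence two.", "Sentence three."], rng))
print(infill_span("the quick brown fox jumps over the lazy dog".split(), rng=rng))
```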

SummAE: Zero-Shot Abstractive Text Summarization using Length-Agnostic Auto-Encoders

2 Oct 2019 google-research/google-research

We show results for extractive and human baselines to demonstrate a large abstractive gap in performance.

ABSTRACTIVE TEXT SUMMARIZATION · DENOISING

Levenshtein Transformer

NeurIPS 2019 pytorch/fairseq

We further confirm the flexibility of our model by showing a Levenshtein Transformer trained by machine translation can straightforwardly be used for automatic post-editing.

AUTOMATIC POST-EDITING · TEXT SUMMARIZATION
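The Levenshtein Transformer edits a sequence with insertion and deletion operations, which is what makes the post-editing use above straightforward. A hedged sketch of the kind of delete/insert oracle such a model can be trained to imitate, derived from a longest-common-subsequence alignment (illustrative only; not the pytorch/fairseq implementation):

```python
def edit_oracle(draft, target):
    """Derive which draft tokens to delete and which target tokens to insert,
    keeping a longest common subsequence intact -- the kind of expert edits
    a Levenshtein-Transformer-style model learns to imitate."""
    m, n = len(draft), len(target)
    # lcs[i][j] = length of the LCS of draft[i:] and target[j:].
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            lcs[i][j] = (lcs[i + 1][j + 1] + 1 if draft[i] == target[j]
                         else max(lcs[i + 1][j], lcs[i][j + 1]))
    deletions, insertions, i, j = [], [], 0, 0
    while i < m and j < n:
        if draft[i] == target[j]:
            i, j = i + 1, j + 1
        elif lcs[i + 1][j] >= lcs[i][j + 1]:
            deletions.append(draft[i]); i += 1      # drop a draft-only token
        else:
            insertions.append(target[j]); j += 1    # add a target-only token
    deletions += draft[i:]
    insertions += target[j:]
    return deletions, insertions

draft  = "the cat sat on mat".split()
target = "the cat sat on the mat".split()
print(edit_oracle(draft, target))  # ([], ['the'])
```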