Document Summarization

192 papers with code • 7 benchmarks • 28 datasets

Automatic Document Summarization is the task of rewriting a document into a shorter form while still retaining its important content. The two most popular paradigms are extractive approaches and abstractive approaches. Extractive approaches generate summaries by extracting parts of the original document (usually sentences), while abstractive methods may generate new words or phrases that are not in the original document.

Source: HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
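
The extractive paradigm can be illustrated with a toy frequency-based scorer. The sketch below is a minimal, assumption-laden example (not any of the models listed on this page): it ranks sentences by the document-level frequency of their words and keeps the top few in original order.

```python
import re
from collections import Counter

def extractive_summary(document: str, num_sentences: int = 3) -> str:
    """Toy extractive summarizer: score each sentence by how frequent its
    words are in the whole document, then keep the top-scoring sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    freq = Counter(re.findall(r"\w+", document.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)
    kept = sorted(ranked[:num_sentences])  # restore document order
    return " ".join(sentences[i] for i in kept)
```

An abstractive system, by contrast, generates the summary token by token and is free to use words that never appear in the source.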


Most implemented papers

Get To The Point: Summarization with Pointer-Generator Networks

abisee/pointer-generator ACL 2017

Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).
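
The core of the pointer-generator idea is a soft switch p_gen that interpolates between generating a word from the vocabulary and copying it from the source via the attention distribution. The sketch below is a hedged PyTorch rendering of that mixing step with illustrative tensor names; it is not the authors' implementation.

```python
import torch

def pointer_generator_distribution(p_vocab, attention, src_ids, p_gen, vocab_size):
    """Mix the decoder's vocabulary distribution with a copy distribution
    over source tokens, as in pointer-generator networks.

    p_vocab:   (batch, vocab_size)  softmax over the output vocabulary
    attention: (batch, src_len)     attention weights over source positions
    src_ids:   (batch, src_len)     vocabulary ids of the source tokens
    p_gen:     (batch, 1)           probability of generating vs. copying
    """
    # Scatter attention mass onto the vocabulary ids of the source tokens.
    copy_dist = torch.zeros(p_vocab.size(0), vocab_size, device=p_vocab.device)
    copy_dist.scatter_add_(1, src_ids, attention)
    # Final distribution: generate with probability p_gen, copy with 1 - p_gen.
    return p_gen * p_vocab + (1.0 - p_gen) * copy_dist
```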

Text Summarization with Pretrained Encoders

nlpyang/PreSumm IJCNLP 2019

For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not).
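
In practice this amounts to giving the two halves of the model separate optimizers, e.g. a small learning rate (and longer warmup) for the pretrained encoder and a larger one for the randomly initialized decoder. The snippet below is a simplified sketch with illustrative hyperparameters, not the exact PreSumm schedule.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for a pretrained encoder and a freshly initialized decoder.
encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model=768, nhead=8), num_layers=6)
decoder = nn.TransformerDecoder(nn.TransformerDecoderLayer(d_model=768, nhead=8), num_layers=6)

# Separate optimizers: gentle updates for the pretrained encoder,
# larger steps for the decoder trained from scratch.
enc_opt = torch.optim.Adam(encoder.parameters(), lr=2e-5)
dec_opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

# Inside the training loop both are stepped after the shared backward pass:
#   loss.backward(); enc_opt.step(); dec_opt.step()
#   enc_opt.zero_grad(); dec_opt.zero_grad()
```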

Language Models are Unsupervised Multitask Learners

openai/gpt-2 Preprint 2019

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

Unified Language Model Pre-training for Natural Language Understanding and Generation

microsoft/unilm NeurIPS 2019

This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks.

GLM: General Language Model Pretraining with Autoregressive Blank Infilling

THUDM/GLM ACL 2022

On a wide range of tasks across NLU, conditional generation, and unconditional generation, GLM outperforms BERT, T5, and GPT given the same model sizes and data, and achieves the best performance from a single pretrained model with 1.25x the parameters of BERT-Large, demonstrating its generalizability to different downstream tasks.

SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents

kedz/nnsum 14 Nov 2016

We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art.
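
A stripped-down version of this idea: sentence embeddings are fed through a bidirectional GRU and each sentence receives a keep/skip probability. The sketch below is only an illustrative approximation; the paper's full model additionally factors in content, salience, novelty, and position terms.

```python
import torch
import torch.nn as nn

class ExtractiveSentenceScorer(nn.Module):
    """Simplified SummaRuNNer-style scorer: run an RNN over sentence
    embeddings and emit a keep/skip probability for each sentence."""

    def __init__(self, sent_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(sent_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, 1)

    def forward(self, sentence_embeddings: torch.Tensor) -> torch.Tensor:
        # sentence_embeddings: (batch, num_sentences, sent_dim)
        states, _ = self.rnn(sentence_embeddings)
        # Probability that each sentence belongs in the extractive summary.
        return torch.sigmoid(self.classifier(states)).squeeze(-1)
```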

Bottom-Up Abstractive Summarization

sebastianGehrmann/bottom-up-summary EMNLP 2018

We use this selector as a bottom-up attention step to constrain the model to likely phrases.
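
One way to picture this constraint: a separately trained content selector assigns each source token a probability of appearing in the summary, and the copy attention is masked to the tokens above a threshold before being renormalized. The function below is an assumption-labeled sketch of that masking step, not the authors' code.

```python
import torch

def constrain_copy_attention(attention, selection_probs, threshold=0.5, eps=1e-8):
    """Bottom-up style constraint: zero out copy attention on source tokens
    the content selector deems unlikely, then renormalize.

    attention:       (batch, src_len) decoder copy-attention weights
    selection_probs: (batch, src_len) per-token probabilities from the selector
    """
    mask = (selection_probs >= threshold).float()
    masked = attention * mask
    return masked / (masked.sum(dim=-1, keepdim=True) + eps)
```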

Generating Wikipedia by Summarizing Long Sequences

tensorflow/tensor2tensor ICLR 2018

We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents.

Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization

shashiongithub/XSum EMNLP 2018

We introduce extreme summarization, a new single-document summarization task which does not favor extractive strategies and calls for an abstractive modeling approach.
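
The accompanying XSum dataset pairs BBC articles with single-sentence summaries. Assuming the Hugging Face datasets library (and the EdinburghNLP/xsum hub id, which may differ from the original release), it can be inspected as follows:

```python
from datasets import load_dataset

# Hub id assumed; the dataset exposes "document" (article) and "summary"
# (single-sentence abstract) fields plus an "id".
xsum = load_dataset("EdinburghNLP/xsum", split="validation")
example = xsum[0]
print(example["document"][:300])
print("SUMMARY:", example["summary"])
```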

Scoring Sentence Singletons and Pairs for Abstractive Summarization

ucfnlp/summarization-sing-pair-mix ACL 2019

There is thus a crucial gap between sentence selection and fusion to support summarizing by both compressing single sentences and fusing pairs.