Text Summarization with Pretrained Encoders

IJCNLP 2019  ·  Yang Liu, Mirella Lapata

Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models, which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-stage fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at https://github.com/nlpyang/PreSumm
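The two-optimizer fine-tuning schedule described above can be sketched as follows. This is a minimal illustration, not the authors' code: it uses the standard warmup-then-decay (Noam-style) learning-rate curve, and the base learning rates and warmup lengths shown are assumed, illustrative values chosen to mimic a cautious schedule for the pretrained encoder and a more aggressive one for the randomly initialized decoder.

```python
def finetune_lr(step: int, base_lr: float, warmup: int) -> float:
    """Warmup-then-decay schedule: lr = base_lr * min(step**-0.5, step * warmup**-1.5).

    The rate rises roughly linearly for `warmup` steps, peaks near
    step == warmup, then decays as 1/sqrt(step).
    """
    step = max(step, 1)  # avoid 0 ** -0.5 at the first step
    return base_lr * min(step ** -0.5, step * warmup ** -1.5)


# Hypothetical settings (not taken from the paper verbatim): the pretrained
# encoder gets a small base rate and a long warmup, the untrained decoder a
# large base rate and a short warmup, so the decoder can catch up without
# destabilizing the encoder's pretrained weights.
ENC = dict(base_lr=2e-3, warmup=20_000)
DEC = dict(base_lr=0.1, warmup=10_000)

if __name__ == "__main__":
    for step in (100, 1_000, 10_000, 50_000):
        lr_e = finetune_lr(step, **ENC)
        lr_d = finetune_lr(step, **DEC)
        print(f"step {step:>6}: encoder lr = {lr_e:.2e}, decoder lr = {lr_d:.2e}")
```

Early in fine-tuning the decoder's learning rate is orders of magnitude larger than the encoder's, which is the intended effect: the mismatch between a pretrained encoder and a randomly initialized decoder is alleviated by updating each at its own pace.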


Results from the Paper

Ranked #4 on Extractive Text Summarization on CNN / Daily Mail (using extra training data)

| Task                           | Dataset          | Model         | Metric  | Value | Global Rank |
|--------------------------------|------------------|---------------|---------|-------|-------------|
| Document Summarization         | CNN / Daily Mail | BertSumExt    | ROUGE-1 | 43.85 | #6          |
|                                |                  |               | ROUGE-2 | 20.34 | #9          |
|                                |                  |               | ROUGE-L | 39.90 | #9          |
| Abstractive Text Summarization | CNN / Daily Mail | BertSumExtAbs | ROUGE-1 | 42.13 | #17         |
|                                |                  |               | ROUGE-2 | 19.60 | #17         |
|                                |                  |               | ROUGE-L | 39.18 | #18         |
| Extractive Text Summarization  | CNN / Daily Mail | BertSumExt    | ROUGE-1 | 43.85 | #4          |
|                                |                  |               | ROUGE-2 | 20.34 | #4          |
|                                |                  |               | ROUGE-L | 39.90 | #4          |
| Text Summarization             | X-Sum            | BertSumExtAbs | ROUGE-1 | 38.81 | #5          |
|                                |                  |               | ROUGE-2 | 16.50 | #5          |
|                                |                  |               | ROUGE-L | 31.27 | #3          |