Text Summarization with Pretrained Encoders

IJCNLP 2019  ·  Yang Liu, Mirella Lapata

Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at https://github.com/nlpyang/PreSumm
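To make the extractive setup concrete, below is a minimal sketch of a BertSumExt-style model, assuming PyTorch and the Hugging Face transformers library. It is not the authors' implementation: class names, the number of inter-sentence layers, and the `cls_positions` input are illustrative. The idea is that BERT encodes the whole document with a [CLS] token inserted before each sentence, the per-sentence [CLS] vectors pass through a few extra Transformer layers so sentences can attend to each other, and a sigmoid head scores each sentence for extraction.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class ExtractiveSummarizer(nn.Module):
    """Illustrative BertSumExt-style extractive summarizer (not the authors' code)."""

    def __init__(self, num_inter_layers: int = 2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        # Inter-sentence Transformer layers stacked on top of BERT.
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.inter_sentence = nn.TransformerEncoder(layer, num_layers=num_inter_layers)
        self.score = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask, cls_positions):
        # Token-level representations from BERT for the whole document.
        token_states = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Gather the [CLS] vector inserted before each sentence:
        # cls_positions has shape (batch, num_sentences).
        batch_idx = torch.arange(input_ids.size(0)).unsqueeze(1)
        sent_states = token_states[batch_idx, cls_positions]
        # Let sentences attend to each other, then score each one in [0, 1].
        sent_states = self.inter_sentence(sent_states)
        return torch.sigmoid(self.score(sent_states)).squeeze(-1)
```

At inference time, the highest-scoring sentences (typically the top three for CNN/Daily Mail) would be selected as the extractive summary.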

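The fine-tuning schedule for the abstractive model can be sketched as follows: two Adam optimizers with a smaller learning rate and longer warmup for the pretrained encoder, and a larger learning rate with shorter warmup for the randomly initialized decoder, so the decoder can learn quickly without destabilizing BERT. The helper names are hypothetical, and the learning rates and warmup steps are roughly the values reported in the paper.

```python
import torch

def noam_lambda(warmup_steps: int):
    # Multiplier ~ min(step^-0.5, step * warmup^-1.5), the usual Transformer schedule.
    return lambda step: min(max(step, 1) ** -0.5, max(step, 1) * warmup_steps ** -1.5)

def build_optimizers(encoder: torch.nn.Module, decoder: torch.nn.Module):
    # Roughly the paper's settings: encoder lr 2e-3 with 20k warmup steps,
    # decoder lr 0.1 with 10k warmup steps.
    opt_enc = torch.optim.Adam(encoder.parameters(), lr=2e-3, betas=(0.9, 0.999))
    opt_dec = torch.optim.Adam(decoder.parameters(), lr=0.1, betas=(0.9, 0.999))
    sched_enc = torch.optim.lr_scheduler.LambdaLR(opt_enc, noam_lambda(20_000))
    sched_dec = torch.optim.lr_scheduler.LambdaLR(opt_dec, noam_lambda(10_000))
    return (opt_enc, sched_enc), (opt_dec, sched_dec)
```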
Results (ROUGE F1, with global leaderboard rank in parentheses):

| Task | Dataset | Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|---|---|
| Extractive Document Summarization | CNN / Daily Mail | BertSumExt | 43.85 (#1) | 20.34 (#1) | 39.90 (#1) |
| Abstractive Text Summarization | CNN / Daily Mail | BertSumExtAbs | 42.13 (#27) | 19.60 (#27) | 39.18 (#28) |
| Document Summarization | CNN / Daily Mail | BertSumExt | 43.85 (#9) | 20.34 (#11) | 39.90 (#12) |
| Text Summarization | X-Sum | BertSumExtAbs | 38.81 (#8) | 16.50 (#11) | 31.27 (#4) |