Text Summarization with Pretrained Encoders

EMNLP-IJCNLP 2019 · Yang Liu, Mirella Lapata

Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models, which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models...
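
The extractive variant (BertSumExt) encodes the whole document with BERT, takes one [CLS] vector per sentence, and stacks inter-sentence Transformer layers on top to score sentences for extraction. Below is a minimal sketch of that idea in PyTorch, assuming the Hugging Face transformers library; the class name, layer count, and input conventions are illustrative, not the authors' released implementation.

```python
# Sketch of a BertSumExt-style extractive head: BERT encodes the document,
# per-sentence [CLS] vectors are gathered, and a small Transformer stack
# scores each sentence for inclusion in the summary.
import torch
import torch.nn as nn
from transformers import BertModel

class BertSumExtSketch(nn.Module):
    def __init__(self, num_inter_layers: int = 2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8,
                                           batch_first=True)
        # Inter-sentence Transformer layers stacked on top of BERT.
        self.inter_sentence = nn.TransformerEncoder(layer, num_inter_layers)
        self.scorer = nn.Linear(hidden, 1)  # per-sentence extraction logit

    def forward(self, input_ids, attention_mask, cls_positions):
        # input_ids: (batch, seq_len), with a [CLS] token before every sentence.
        # cls_positions: (batch, num_sents) indices of those [CLS] tokens.
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        # Gather one vector per sentence from its [CLS] position.
        idx = cls_positions.unsqueeze(-1).expand(-1, -1, hidden.size(-1))
        sent_vecs = hidden.gather(1, idx)      # (batch, num_sents, hidden)
        sent_vecs = self.inter_sentence(sent_vecs)
        # Sigmoid score per sentence; top-scoring sentences form the summary.
        return torch.sigmoid(self.scorer(sent_vecs)).squeeze(-1)
```

For brevity the sketch omits padding masks on the inter-sentence layers and the interval segment embeddings the paper uses to distinguish adjacent sentences.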


Evaluation Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Extractive Document Summarization | CNN / Daily Mail | BertSumExt | ROUGE-2 | 20.34 | #1 |
| Extractive Document Summarization | CNN / Daily Mail | BertSumExt | ROUGE-1 | 43.85 | #2 |
| Extractive Document Summarization | CNN / Daily Mail | BertSumExt | ROUGE-L | 39.90 | #1 |
| Document Summarization | CNN / Daily Mail | BertSumExt | ROUGE-1 | 43.85 | #1 |
| Document Summarization | CNN / Daily Mail | BertSumExt | ROUGE-2 | 20.34 | #3 |
| Document Summarization | CNN / Daily Mail | BertSumExt | ROUGE-L | 39.90 | #3 |
| Abstractive Text Summarization | CNN / Daily Mail | BertSumExtAbs | ROUGE-1 | 42.13 | #2 |
| Abstractive Text Summarization | CNN / Daily Mail | BertSumExtAbs | ROUGE-2 | 19.60 | #2 |
| Abstractive Text Summarization | CNN / Daily Mail | BertSumExtAbs | ROUGE-L | 39.18 | #2 |
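
The ROUGE-1/2/L values above are F1 scores measuring unigram, bigram, and longest-common-subsequence overlap between a generated summary and the reference. One convenient way to compute comparable numbers is Google's rouge-score package, sketched below; note the paper itself reports scores from the standard ROUGE toolkit, which can differ slightly from this reimplementation.

```python
# Sketch of computing ROUGE-1/2/L for a candidate summary against a
# reference, using the rouge-score package (pip install rouge-score).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
reference = "the cat sat on the mat"
candidate = "the cat lay on the mat"
scores = scorer.score(reference, candidate)
for name, s in scores.items():
    # Each entry carries precision, recall, and F1; leaderboards report F1.
    print(f"{name}: F1 = {s.fmeasure:.4f}")
```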