Summary Level Training of Sentence Rewriting for Abstractive Summarization

WS 2019  ·  Sanghwan Bae, Taeuk Kim, Jihoon Kim, Sang-goo Lee

As an attempt to combine extractive and abstractive summarization, Sentence Rewriting models adopt the strategy of first extracting salient sentences from a document and then paraphrasing the selected ones to generate a summary. However, the existing models in this framework mostly rely on sentence-level rewards or suboptimal labels, causing a mismatch between the training objective and the evaluation metric. In this paper, we present a novel training signal that directly maximizes summary-level ROUGE scores through reinforcement learning. In addition, we incorporate BERT into our model, making good use of its natural language understanding capability. In extensive experiments, we show that a combination of our proposed model and training procedure obtains new state-of-the-art performance on both the CNN/Daily Mail and New York Times datasets. We also demonstrate that it generalizes better on the DUC-2002 test set.
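The core idea of the summary-level training signal is to score the extracted (and later rewritten) sentences jointly, as a single summary, rather than rewarding each sentence in isolation. Below is a minimal, illustrative sketch of that idea, not the authors' implementation: a from-scratch ROUGE-N F1 over the whole summary used as a REINFORCE-style reward. The names `extractor`, `extractor.sample`, and `baseline` in the trailing comments are hypothetical placeholders.

```python
# Illustrative sketch of a summary-level ROUGE reward for RL training of a
# sentence extractor. Not the paper's code; assumptions are noted inline.

from collections import Counter
from typing import List


def rouge_n_f1(candidate: List[str], reference: List[str], n: int = 2) -> float:
    """ROUGE-N F1 between two token lists, computed over n-gram overlap."""
    def ngrams(tokens: List[str], n: int) -> Counter:
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def summary_level_reward(extracted_sents: List[List[str]],
                         reference_summary: List[str]) -> float:
    """Score all extracted sentences together as one summary (summary-level),
    instead of matching each sentence to the reference individually."""
    whole_summary = [tok for sent in extracted_sents for tok in sent]
    return rouge_n_f1(whole_summary, reference_summary, n=2)


# REINFORCE-style update (hypothetical extractor API, shown as comments only):
#   sampled_ids, log_prob = extractor.sample(document_sentences)
#   reward = summary_level_reward([document_sentences[i] for i in sampled_ids],
#                                 reference_summary)
#   loss = -(reward - baseline) * log_prob   # baseline reduces variance
```

Because the reward is computed over the concatenated extraction, the extractor is encouraged to pick sentences that are jointly informative and non-redundant, which sentence-level rewards cannot capture.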

Task | Dataset | Model | Metric | Value | Global Rank
Extractive Text Summarization | CNN / Daily Mail | BERT-ext + RL | ROUGE-1 | 42.76 | #5
Extractive Text Summarization | CNN / Daily Mail | BERT-ext + RL | ROUGE-2 | 19.87 | #6
Extractive Text Summarization | CNN / Daily Mail | BERT-ext + RL | ROUGE-L | 39.11 | #4
Abstractive Text Summarization | CNN / Daily Mail | BERT-ext + abs + RL + rerank | ROUGE-1 | 41.90 | #28
Abstractive Text Summarization | CNN / Daily Mail | BERT-ext + abs + RL + rerank | ROUGE-2 | 19.08 | #30
Abstractive Text Summarization | CNN / Daily Mail | BERT-ext + abs + RL + rerank | ROUGE-L | 39.64 | #27
