BanditSum: Extractive Summarization as a Contextual Bandit

In this work, we propose a novel method for training neural networks to perform single-document extractive summarization without heuristically generated extractive labels. We call our approach BanditSum because it treats extractive summarization as a contextual bandit (CB) problem, in which the model receives a document to summarize (the context) and chooses a sequence of sentences to include in the summary (the action). A policy gradient reinforcement learning algorithm trains the model to select sequences of sentences that maximize the ROUGE score. We perform a series of experiments demonstrating that BanditSum achieves ROUGE scores better than or comparable to the state of the art for extractive summarization, and converges in significantly fewer update steps than competing approaches. In addition, we show empirically that BanditSum performs significantly better than competing approaches when good summary sentences appear late in the source document.
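
To make the contextual-bandit formulation concrete, the sketch below shows one REINFORCE-style update in PyTorch: a scorer assigns each sentence an affinity, a summary is sampled without replacement from the normalized affinities, and the summed log-probability is scaled by the reward. This is a minimal illustration, not the authors' implementation: `AffinityScorer`, `sample_summary`, and `toy_reward` are hypothetical names, the sentence embeddings are random stand-ins for a real encoder, the reward is a toy index-overlap score where any off-the-shelf ROUGE implementation could be substituted, and the baseline term is omitted for brevity.

```python
import torch
import torch.nn as nn

class AffinityScorer(nn.Module):
    """Hypothetical stand-in for a neural sentence encoder: maps each
    sentence embedding to a normalized inclusion probability."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, sent_embs):                     # (num_sents, dim)
        return torch.softmax(self.score(sent_embs).squeeze(-1), dim=0)

def sample_summary(probs, k):
    """Sample k sentence indices without replacement, renormalizing the
    remaining probabilities after each pick; return the indices and the
    summed log-probability of the sampled sequence."""
    mask = torch.ones_like(probs)                     # 1 = still available
    idxs, log_prob = [], torch.zeros(())
    for _ in range(k):
        dist = torch.distributions.Categorical(probs=probs * mask)
        i = dist.sample()
        log_prob = log_prob + dist.log_prob(i)
        idxs.append(int(i))
        mask = mask.clone()                           # avoid in-place edits on a
        mask[i] = 0.0                                 # tensor already used in the graph
    return idxs, log_prob

def toy_reward(idxs, gold_idxs):
    """Toy overlap reward; a real ROUGE score would be used in practice."""
    return len(set(idxs) & set(gold_idxs)) / max(len(gold_idxs), 1)

# One policy-gradient update on a fake 10-sentence document (50-dim embeddings).
torch.manual_seed(0)
model = AffinityScorer(50)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

sent_embs = torch.randn(10, 50)                       # placeholder encoder output
probs = model(sent_embs)
idxs, log_prob = sample_summary(probs, k=3)
reward = toy_reward(idxs, gold_idxs=[0, 2, 5])
loss = -reward * log_prob                             # REINFORCE loss (no baseline)
opt.zero_grad(); loss.backward(); opt.step()
```

Because the whole summary is sampled in one shot and rewarded once, there is no credit assignment across time steps, which is what makes this a contextual bandit rather than a full sequential RL problem.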

EMNLP 2018

Datasets


Task                           Dataset           Model      Metric    Value   Global Rank
Extractive Text Summarization  CNN / Daily Mail  BanditSum  ROUGE-1   41.5    #10
                                                            ROUGE-2   18.7    #11
                                                            ROUGE-L   37.6    #9

Methods


No methods listed for this paper.