A Deep Reinforced Model for Abstractive Summarization

ICLR 2018 · Romain Paulus, Caiming Xiong, Richard Socher

Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries, however, these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and the continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias": they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL, the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
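To make the combined training objective concrete, the sketch below (PyTorch, not the authors' code) shows one plausible way to compute the mixed loss described in the abstract: a teacher-forced maximum-likelihood term plus a self-critical policy-gradient term whose reward is a summary-level metric such as ROUGE, blended as gamma * L_rl + (1 - gamma) * L_ml. The function name, tensor shapes, and stand-in rewards are illustrative assumptions; only the form of the objective follows the paper.

import torch

def mixed_loss(log_probs_ml, target_ids, log_probs_sampled,
               reward_sampled, reward_greedy, gamma=0.9984):
    """Sketch of a mixed ML + RL summarization loss (illustrative, not the authors' code).

    log_probs_ml:      (batch, T, vocab) teacher-forced log-probabilities
    target_ids:        (batch, T) reference token ids
    log_probs_sampled: (batch, T) log-probabilities of tokens sampled from the model
    reward_sampled:    (batch,) sequence-level reward (e.g. ROUGE) of sampled summaries
    reward_greedy:     (batch,) reward of greedily decoded (baseline) summaries
    gamma:             mixing weight; the paper reports a value close to 1
    """
    # Maximum-likelihood term: negative log-likelihood of the reference tokens.
    nll = -log_probs_ml.gather(2, target_ids.unsqueeze(-1)).squeeze(-1)  # (batch, T)
    loss_ml = nll.mean()

    # Self-critical RL term: the greedy decode's reward serves as the baseline,
    # so sampled summaries that beat the greedy decode are reinforced.
    advantage = (reward_sampled - reward_greedy).detach()                # (batch,)
    loss_rl = -(advantage.unsqueeze(1) * log_probs_sampled).sum(dim=1).mean()

    # Mixed objective: gamma * L_rl + (1 - gamma) * L_ml.
    return gamma * loss_rl + (1.0 - gamma) * loss_ml

# Toy usage with random stand-ins for model outputs and ROUGE rewards:
batch, T, vocab = 2, 5, 50
log_probs_ml = torch.log_softmax(torch.randn(batch, T, vocab), dim=-1)
target_ids = torch.randint(vocab, (batch, T))
log_probs_sampled = -torch.rand(batch, T)   # stand-in for log p(sampled token)
reward_sampled = torch.rand(batch)          # stand-in for ROUGE of sampled summaries
reward_greedy = torch.rand(batch)           # stand-in for ROUGE of greedy summaries
print(mixed_loss(log_probs_ml, target_ids, log_probs_sampled, reward_sampled, reward_greedy))

Using the greedy decode as the baseline avoids training a separate critic while still reducing the variance of the policy-gradient estimate, which is what makes the RL term "self-critical".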

Benchmark results (global rank shown in parentheses):

Task                    Dataset                        Model                                       ROUGE-1      ROUGE-2      ROUGE-L
Document Summarization  CNN / Daily Mail               ML + RL (Paulus et al., 2017)               39.87 (#20)  15.82 (#23)  36.90 (#17)
Document Summarization  CNN / Daily Mail               ML + Intra-Attention (Paulus et al., 2017)  38.30 (#24)  14.81 (#25)  35.49 (#24)
Text Summarization      CNN / Daily Mail (Anonymized)  ML + RL, with intra-attention               39.87 (#6)   15.82 (#9)   36.90 (#6)

Methods


No methods listed for this paper.