Improving Neural Abstractive Document Summarization with Explicit Information Selection Modeling

EMNLP 2018  ·  Wei Li, Xinyan Xiao, Yajuan Lyu, Yuanzhuo Wang

Information selection is a crucial component of document summarization. In this paper, we propose to extend the basic neural encoder-decoder framework with an information selection layer that explicitly models and optimizes the information selection process in abstractive document summarization. Specifically, our information selection layer consists of two parts: gated global information filtering and local sentence selection. Unnecessary information in the original document is first filtered globally; salient sentences are then selected locally as each summary sentence is generated in turn. To optimize the information selection process directly, we also introduce distantly supervised training guided by the gold summary. Experimental results demonstrate that explicitly modeling and optimizing information selection significantly improves summarization performance: our model generates more informative and concise summaries and significantly outperforms state-of-the-art neural abstractive methods.
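The two parts of the information selection layer can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gating equation, weight shapes, document representation, and attention scoring used here are all assumptions chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

n, h = 5, 8                        # hypothetical: 5 document sentences, hidden size 8
H = rng.standard_normal((n, h))    # sentence encodings from the encoder
d = H.mean(axis=0)                 # a simple document-level representation (assumption)

# 1) Gated global information filtering: a sigmoid gate, conditioned on each
#    sentence encoding and the document vector, damps unnecessary information.
Wg = rng.standard_normal((h, 2 * h)) * 0.1
G = sigmoid(np.concatenate([H, np.tile(d, (n, 1))], axis=1) @ Wg.T)
H_filtered = G * H                 # element-wise gating of each sentence

# 2) Local sentence selection: while generating summary sentence t, an
#    attention distribution over the filtered sentences picks salient ones.
s_t = rng.standard_normal(h)       # decoder state for summary sentence t
alpha = softmax(H_filtered @ s_t)  # selection weights over document sentences
context_t = alpha @ H_filtered     # context vector fed to the decoder
```

In the paper, the selection weights are additionally optimized with distant supervision derived from the gold summary; here they are left untrained for brevity.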



Task: Abstractive Text Summarization
Dataset: CNN / Daily Mail
Model: Li et al.

Metric     Value    Global Rank
ROUGE-1    41.54    # 25
ROUGE-2    18.18    # 30
ROUGE-L    36.47    # 40

