Systematically Exploring Redundancy Reduction in Summarizing Long Documents

Our analysis of large summarization datasets indicates that redundancy is a very serious problem when summarizing long documents. Yet, redundancy reduction has not been thoroughly investigated in neural summarization. In this work, we systematically explore and compare different ways to deal with redundancy when summarizing long documents. Specifically, we organize the existing methods into categories based on when and how the redundancy is considered. Then, in the context of these categories, we propose three additional methods balancing non-redundancy and importance in a general and flexible way. In a series of experiments, we show that our proposed methods achieve state-of-the-art ROUGE scores on two scientific paper datasets, Pubmed and arXiv, while reducing redundancy significantly.
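The MMR-Select model named in the results below presumably builds on the classic Maximal Marginal Relevance trade-off (Carbonell and Goldstein, 1998), which balances a sentence's importance against its similarity to what has already been selected. Below is a minimal sketch of that textbook formulation, not the paper's actual method: the importance scores, similarity function, and parameter names are all stand-ins.

```python
# Minimal sketch of MMR-style extractive selection: at each step, pick the
# sentence maximizing  lam * importance  -  (1 - lam) * max-similarity-to-selected.
# `importance` and `similarity` are assumed inputs (e.g. model scores and
# cosine similarity over sentence embeddings), not the paper's components.
from typing import Callable, List

def mmr_select(
    sentences: List[str],
    importance: List[float],                   # per-sentence importance scores
    similarity: Callable[[str, str], float],   # pairwise sentence similarity
    budget: int,                               # number of sentences to extract
    lam: float = 0.6,                          # importance vs. non-redundancy trade-off
) -> List[int]:
    selected: List[int] = []
    candidates = set(range(len(sentences)))
    while candidates and len(selected) < budget:
        def mmr(i: int) -> float:
            # Redundancy = highest similarity to any already-selected sentence.
            redundancy = max(
                (similarity(sentences[i], sentences[j]) for j in selected),
                default=0.0,
            )
            return lam * importance[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected
```

The paper's MMR-Select+ and RdLoss variants integrate this kind of redundancy signal into the extractive summarizer itself (at decoding time and in the training loss, respectively); the sketch above only illustrates the underlying trade-off.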

PDF / Abstract (Asian Chapter 2020)

Results from the Paper


Task               | Dataset                     | Model                 | ROUGE-1     | ROUGE-2     | ROUGE-L
Text Summarization | Arxiv HEP-TH citation graph | ExtSum-LG+RdLoss      | 44.01 (#18) | 17.79 (#14) | 39.09 (#14)
Text Summarization | Arxiv HEP-TH citation graph | ExtSum-LG+MMR-Select+ | 43.87 (#19) | 17.50 (#17) | 38.97 (#15)
Text Summarization | Pubmed                      | ExtSum-LG+MMR-Select+ | 45.39 (#15) | 20.37 (#13) | 40.99 (#12)
Text Summarization | Pubmed                      | ExtSum-LG+RdLoss      | 45.30 (#16) | 20.42 (#11) | 40.95 (#13)

(Numbers in parentheses are the global rank on each benchmark leaderboard.)
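For reference, ROUGE-1/2/L F-scores like those above are commonly computed with Google's rouge-score package (pip install rouge-score). The paper's exact evaluation setup may differ (e.g. stemming options or the official ROUGE-1.5.5 script), so treat this as an illustrative sketch only.

```python
# Illustrative ROUGE computation with the rouge-score package; the strings
# here are placeholders, not data from the paper.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    target="the reference (gold) summary text",
    prediction="the system-generated summary text",
)
for name, result in scores.items():
    # Report F-measure scaled to 0-100, matching the table's convention.
    print(name, round(result.fmeasure * 100, 2))
```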
