Extractive Summarization of Long Documents by Combining Global and Local Context

IJCNLP 2019  ·  Wen Xiao, Giuseppe Carenini

In this paper, we propose a novel neural single-document extractive summarization model for long documents, incorporating both the global context of the whole document and the local context within the current topic. We evaluate the model on two datasets of scientific papers, PubMed and arXiv, where it outperforms previous work (both extractive and abstractive models) on ROUGE-1, ROUGE-2, and METEOR scores. We also show that, consistent with our goal, the benefits of our method become stronger as we apply it to longer documents. Rather surprisingly, an ablation study indicates that the benefits of our model seem to come exclusively from modeling the local context, even for the longest documents.
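The core idea above can be sketched as follows: each sentence is scored from its own representation concatenated with a local (topic-segment) context vector and a global (whole-document) context vector. This is a minimal illustrative sketch, not the paper's architecture; the encoder is a random stand-in, mean pooling replaces the paper's learned context encoders, and the scorer is a placeholder for a learned MLP.

```python
import numpy as np

def encode_sentences(n_sents, dim, rng):
    # Stand-in for a learned sentence encoder (e.g. a BiLSTM);
    # here each sentence just gets a random vector.
    return rng.standard_normal((n_sents, dim))

def score_sentences(sent_reps, topic_ids):
    """Score each sentence from [sentence; local context; global context]."""
    # Global context: pooled over all sentences in the document.
    global_ctx = sent_reps.mean(axis=0)
    scores = []
    for i, rep in enumerate(sent_reps):
        # Local context: pooled over sentences in the same topic segment.
        in_segment = topic_ids == topic_ids[i]
        local_ctx = sent_reps[in_segment].mean(axis=0)
        # Combined representation; a learned MLP would map this to a
        # relevance score. The sum is only a placeholder scorer.
        combined = np.concatenate([rep, local_ctx, global_ctx])
        scores.append(combined.sum())
    return np.array(scores)

rng = np.random.default_rng(0)
reps = encode_sentences(6, 4, rng)             # 6 sentences, dim 4
topic_ids = np.array([0, 0, 0, 1, 1, 1])       # two topic segments
scores = score_sentences(reps, topic_ids)      # one score per sentence
```

An extractive summary would then be built by selecting the top-scoring sentences, subject to a length budget.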


Results from the Paper

Task                 Dataset                       Model      Metric    Value  Global Rank
Text Summarization   Arxiv HEP-TH citation graph   ExtSum-LG  ROUGE-1   43.58  # 20
Text Summarization   Arxiv HEP-TH citation graph   ExtSum-LG  ROUGE-2   17.37  # 18
Text Summarization   Pubmed                        ExtSum-LG  ROUGE-1   44.81  # 19
Text Summarization   Pubmed                        ExtSum-LG  ROUGE-2   19.74  # 16

