A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents

Neural abstractive summarization models have led to promising results in summarizing relatively short documents. We propose the first model for abstractive summarization of single, longer-form documents (e.g., research papers). Our approach consists of a new hierarchical encoder that models the discourse structure of a document, and an attentive discourse-aware decoder to generate the summary. Empirical results on two large-scale datasets of scientific papers show that our model significantly outperforms state-of-the-art models.

NAACL 2018
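The core mechanism is a decoder whose attention is informed by the document's discourse structure: one way to realize this is to let attention over sections re-weight the word-level attention inside them. Below is a minimal PyTorch sketch of that idea; it is not the authors' released code, and the class name, layer shapes, and scoring functions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscourseAwareAttention(nn.Module):
    """Word-level attention re-weighted by attention over discourse sections."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # simple scorers over [encoder state; decoder state] (an assumption;
        # the paper's exact scoring functions may differ)
        self.word_scorer = nn.Linear(2 * hidden_size, 1)
        self.section_scorer = nn.Linear(2 * hidden_size, 1)

    def forward(self, word_states, section_states, section_ids, decoder_state):
        # word_states:    (num_words, H)    word-level encoder outputs
        # section_states: (num_sections, H) section-level encoder outputs
        # section_ids:    (num_words,)      section index of each word
        # decoder_state:  (H,)              current decoder hidden state
        dec_w = decoder_state.expand_as(word_states)
        dec_s = decoder_state.expand_as(section_states)

        word_logits = self.word_scorer(torch.cat([word_states, dec_w], -1)).squeeze(-1)
        sect_logits = self.section_scorer(torch.cat([section_states, dec_s], -1)).squeeze(-1)

        # attend over sections, then scale each word by its section's weight
        section_attn = F.softmax(sect_logits, dim=-1)
        scaled = F.softmax(word_logits, dim=-1) * section_attn[section_ids]
        word_attn = scaled / scaled.sum()

        # context vector fed to the decoder at this step
        context = word_attn @ word_states
        return context, word_attn

# toy usage with random states: 10 words spread over 3 sections, hidden size 8
attn = DiscourseAwareAttention(hidden_size=8)
ctx, weights = attn(
    torch.randn(10, 8),
    torch.randn(3, 8),
    torch.tensor([0, 0, 0, 1, 1, 1, 1, 2, 2, 2]),
    torch.randn(8),
)
print(ctx.shape, weights.sum())  # torch.Size([8]), weights sum to 1
```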

Datasets


Introduced in the Paper:

arXiv Summarization Dataset

Used in the Paper:

Pubmed, CNN/Daily Mail, arXiv
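The two long-document corpora are now widely mirrored. A minimal sketch of loading them, assuming the public scientific_papers mirror on the Hugging Face Hub (the identifier, configs, and field names are assumptions about that mirror, not part of the paper):

```python
from datasets import load_dataset

# configs "arxiv" and "pubmed" correspond to the two corpora above;
# some versions of `datasets` require trust_remote_code=True here
arxiv = load_dataset("scientific_papers", "arxiv", split="train")

example = arxiv[0]
print(example["article"][:300])   # full paper body: the model input
print(example["abstract"][:300])  # the abstract: the summarization target
print(example["section_names"])   # section headings: the discourse structure
```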
Results (global leaderboard rank in parentheses; the paper's own model appears as "Discourse"):

Task                                   Dataset                      Model      ROUGE-1      ROUGE-2     ROUGE-L
Text Summarization                     arXiv                        Discourse  35.80 (#26)  -           -
Unsupervised Extractive Summarization  arXiv Summarization Dataset  LSA        29.91 (#7)   7.42 (#8)   25.67 (#8)
Unsupervised Extractive Summarization  arXiv Summarization Dataset  LexRank    33.85 (#5)   10.73 (#5)  28.99 (#5)
Unsupervised Extractive Summarization  arXiv Summarization Dataset  SumBasic   29.47 (#9)   6.95 (#9)   26.30 (#7)
Text Summarization                     Pubmed                       Discourse  38.93 (#26)  -           -
Unsupervised Extractive Summarization  Pubmed                       LSA        33.89 (#9)   9.93 (#9)   29.70 (#8)
Unsupervised Extractive Summarization  Pubmed                       SumBasic   37.15 (#7)   11.36 (#7)  33.43 (#7)
Unsupervised Extractive Summarization  Pubmed                       LexRank    39.19 (#4)   13.89 (#5)  34.59 (#4)
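The LexRank, LSA, and SumBasic rows above are classical unsupervised extractive baselines. A minimal sketch of running one of them with the sumy package, a common open implementation (it is an assumption that this matches the setup behind the reported numbers; sumy also ships LsaSummarizer and SumBasicSummarizer for the other two):

```python
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer  # needs NLTK punkt data installed
from sumy.summarizers.lex_rank import LexRankSummarizer

document_text = (
    "Neural abstractive models summarize short documents well. "
    "Research papers are much longer and have explicit section structure. "
    "Exploiting that discourse structure improves long-document summaries."
)

parser = PlaintextParser.from_string(document_text, Tokenizer("english"))
summarizer = LexRankSummarizer()

# pick the 2 most central sentences as the extractive summary
for sentence in summarizer(parser.document, 2):
    print(sentence)
```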

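The metrics in the table are ROUGE scores. A minimal sketch of computing ROUGE-1/2/L with the rouge_score package (one common implementation; different ROUGE implementations can differ slightly from leaderboard numbers):

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "a discourse aware attention model for abstractive summarization"
candidate = "an attention model that summarizes long documents abstractively"

for name, score in scorer.score(reference, candidate).items():
    print(f"{name}: P={score.precision:.3f} R={score.recall:.3f} F1={score.fmeasure:.3f}")
```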
Methods


No methods listed for this paper.