Coarse-to-Fine Attention Models for Document Summarization

WS 2017  ·  Jeffrey Ling, Alexander Rush

Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization. We propose a novel coarse-to-fine attention model that hierarchically reads a document, using coarse attention to select top-level chunks of text and fine attention to read the words of the chosen chunks. While the computation for training standard attention models scales linearly with source sequence length, our method scales with the number of top-level chunks and can handle much longer sequences. Empirically, we find that while coarse-to-fine attention models lag behind state-of-the-art baselines, our method achieves the desired behavior of sparsely attending to subsets of the document for generation.
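
The hierarchical reading described above can be pictured as two stacked attention layers: a coarse layer that scores document chunks and a fine layer that attends over the words of the selected chunk. The sketch below is an illustrative PyTorch implementation, not the authors' released code; the module and argument names (CoarseToFineAttention, chunk_word_states, hidden_dim) are assumptions, chunk selection is done with a simple argmax, and the mean-pooled chunk representation stands in for whatever chunk encoder the paper actually uses.

```python
# Minimal sketch of two-level coarse-to-fine attention (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineAttention(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.coarse_proj = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.fine_proj = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, query, chunk_word_states):
        """
        query:             (batch, hidden)                    decoder state
        chunk_word_states: (batch, n_chunks, chunk_len, hidden)
        Returns a context vector of shape (batch, hidden).
        """
        # Coarse level: represent each chunk by its mean word state and
        # attend over chunks. (The paper also considers hard/sparse chunk
        # selection trained with reinforcement-style estimators.)
        chunk_reprs = chunk_word_states.mean(dim=2)                 # (B, C, H)
        coarse_scores = torch.einsum(
            "bh,bch->bc", self.coarse_proj(query), chunk_reprs)    # (B, C)
        coarse_attn = F.softmax(coarse_scores, dim=-1)

        # Hard selection: read only the words of the highest-scoring chunk,
        # so per-step cost depends on chunk_len, not full document length.
        top_chunk = coarse_attn.argmax(dim=-1)                      # (B,)
        batch_idx = torch.arange(query.size(0), device=query.device)
        words = chunk_word_states[batch_idx, top_chunk]             # (B, L, H)

        # Fine level: standard soft attention over the chosen chunk's words.
        fine_scores = torch.einsum(
            "bh,blh->bl", self.fine_proj(query), words)             # (B, L)
        fine_attn = F.softmax(fine_scores, dim=-1)
        context = torch.einsum("bl,blh->bh", fine_attn, words)      # (B, H)
        return context, coarse_attn, fine_attn

# Usage on random tensors: 10 chunks of 20 words each.
attn = CoarseToFineAttention(hidden_dim=64)
q = torch.randn(2, 64)
doc = torch.randn(2, 10, 20, 64)
ctx, _, _ = attn(q, doc)
print(ctx.shape)  # torch.Size([2, 64])
```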


Datasets


Task: Document Summarization
Dataset: CNN / Daily Mail
Model: C2F + ALTERNATE

Metric     Value   Global Rank
PPL        23.6    # 1
ROUGE-1    31.1    # 25
ROUGE-2    15.4    # 24
ROUGE-L    28.8    # 25
