Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization

In this paper, we investigate the sentence summarization task, which produces a summary from a source sentence. Neural sequence-to-sequence models have achieved considerable success on this task, but most existing approaches focus only on improving the informativeness of the summary and ignore its correctness, i.e., the requirement that the summary should not contain information unrelated to the source sentence. We argue that correctness is an essential requirement for summarization systems. Since a correct summary is semantically entailed by the source sentence, we incorporate entailment knowledge into abstractive summarization models. We propose an entailment-aware encoder trained under a multi-task framework (i.e., summary generation and entailment recognition) and an entailment-aware decoder trained by entailment-based Reward Augmented Maximum Likelihood (RAML). Experimental results demonstrate that our models significantly outperform baselines in terms of both informativeness and correctness.
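To make the two ideas concrete, here is a minimal PyTorch sketch of (a) a shared encoder serving both summary generation and entailment recognition, the core of the multi-task setup, and (b) a RAML-style loss that weights sampled targets by exponentiated reward. All names, layer sizes, the mixing weight `lam`, and the temperature `tau` are illustrative assumptions; the paper's actual architecture (selective encoding, attention, and its specific entailment reward) is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntailmentAwareSummarizer(nn.Module):
    """Shared encoder trained on two tasks: summary generation and
    entailment recognition. A simplified sketch, not the paper's model."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Encoder shared by both tasks (the heart of the multi-task setup).
        self.encoder = nn.GRU(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Task 1: summary decoder (no attention in this sketch).
        self.decoder = nn.GRU(embed_dim, 2 * hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)
        # Task 2: 3-way entailment classifier (entail/neutral/contradict).
        self.entail_head = nn.Linear(2 * hidden_dim, num_labels)

    def encode(self, src):
        _, h = self.encoder(self.embed(src))        # h: (2, B, H)
        return torch.cat([h[0], h[1]], dim=-1)      # (B, 2H)

    def forward(self, src, tgt_in):
        h0 = self.encode(src).unsqueeze(0)          # (1, B, 2H)
        dec_out, _ = self.decoder(self.embed(tgt_in), h0)
        return self.out(dec_out)                    # (B, T, V) token logits

    def entailment_logits(self, pair):
        # `pair` is a concatenated premise-hypothesis token sequence;
        # encoding the pair jointly is a simplification assumed here.
        return self.entail_head(self.encode(pair))


def multitask_loss(model, src, tgt_in, tgt_out, pair, labels, lam=0.5):
    """Joint objective: summarization NLL + lam * entailment CE.
    `lam` is a hypothetical mixing weight, not from the paper."""
    logits = model(src, tgt_in)
    summ = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tgt_out.reshape(-1))
    entail = F.cross_entropy(model.entailment_logits(pair), labels)
    return summ + lam * entail


def raml_loss(model, src, samples, rewards, tau=0.85):
    """RAML: weight each sampled target's NLL by its exponentiated reward.
    In the paper the reward favors summaries entailed by the source;
    here `rewards` is any per-sample scalar, `tau` a temperature."""
    weights = F.softmax(torch.as_tensor(rewards) / tau, dim=0)
    loss = src.new_zeros((), dtype=torch.float)
    for w, y in zip(weights, samples):               # y: (B, T) token ids
        logits = model(src, y[:, :-1])
        nll = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                              y[:, 1:].reshape(-1))
        loss = loss + w * nll
    return loss
```

In training, the multi-task loss and the RAML loss would be applied in alternating or staged fashion; as the temperature `tau` shrinks, the RAML weighting concentrates on the highest-reward samples and approaches ordinary maximum likelihood on the best candidate.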

PDF / Abstract (COLING 2018)

Results from the Paper


Task                 Dataset           Model                              Metric    Value   Global Rank
Text Summarization   DUC 2004 Task 1   Seq2seq + selective + MTL + ERAM   ROUGE-1   29.33   #7
Text Summarization   DUC 2004 Task 1   Seq2seq + selective + MTL + ERAM   ROUGE-2   10.24   #8
Text Summarization   DUC 2004 Task 1   Seq2seq + selective + MTL + ERAM   ROUGE-L   25.24   #8
Text Summarization   GigaWord          Seq2seq + selective + MTL + ERAM   ROUGE-1   35.33   #35
Text Summarization   GigaWord          Seq2seq + selective + MTL + ERAM   ROUGE-2   17.27   #32
Text Summarization   GigaWord          Seq2seq + selective + MTL + ERAM   ROUGE-L   33.19   #35

Methods


No methods listed for this paper.