Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization

In this paper, we propose to leverage a unique characteristic of dialogues, namely that participants share commonsense knowledge, to address the difficulties of summarizing them. We present SICK, a framework that uses commonsense inferences as additional context. Compared to previous work that relies solely on the input dialogue, SICK uses an external knowledge model to generate a rich set of commonsense inferences and selects the most probable one with a similarity-based selection method. Built upon SICK, SICK++ utilizes commonsense as supervision: the task of generating commonsense inferences is added to dialogue summarization in a multi-task learning setting. Experimental results show that with injected commonsense knowledge, our framework generates more informative and consistent summaries than existing methods.
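The similarity-based selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate inferences would come from an external knowledge model (e.g. COMET), and the paper's actual similarity function and embeddings may differ. Here a simple bag-of-words cosine similarity stands in for the scoring model, and the dialogue and candidates are toy examples.

```python
# Hedged sketch: pick the commonsense inference most similar to the dialogue.
# Assumptions (not from the paper): bag-of-words cosine similarity, toy data.
import math
import re
from collections import Counter


def _tokens(text: str) -> list[str]:
    """Lowercase word tokens, punctuation stripped."""
    return re.findall(r"[a-z']+", text.lower())


def cosine_sim(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(_tokens(a)), Counter(_tokens(b))
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def select_inference(dialogue: str, candidates: list[str]) -> str:
    """Return the candidate inference scoring highest against the dialogue."""
    return max(candidates, key=lambda c: cosine_sim(dialogue, c))


if __name__ == "__main__":
    dialogue = "Amanda: I baked cookies. Do you want some? Jerry: Sure!"
    candidates = [
        "Amanda wants to share her cookies",
        "Jerry is baking a cake",
        "The weather is cold today",
    ]
    # The selected inference would then be appended to the input as
    # additional context for the summarizer.
    print(select_inference(dialogue, candidates))
```

In SICK the selected inference is concatenated with the dialogue as extra context for the summarization model; SICK++ additionally trains the model to generate such inferences as an auxiliary task.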

Published at COLING 2022.

Results from the Paper

Task               | Dataset        | Model | Metric      | Value | Global Rank
-------------------|----------------|-------|-------------|-------|------------
Text Summarization | DialogSum      | SICK  | ROUGE-1     | 46.26 | #1
Text Summarization | DialogSum      | SICK  | ROUGE-2     | 20.95 | #1
Text Summarization | DialogSum      | SICK  | ROUGE-L     | 41.05 | #1
Text Summarization | DialogSum      | SICK  | BERTScore   | 71.30 | #1
Text Summarization | SAMSum Corpus  | SICK  | ROUGE-1     | 53.73 | #4
Text Summarization | SAMSum Corpus  | SICK  | ROUGE-2     | 28.81 | #4
Text Summarization | SAMSum Corpus  | SICK  | ROUGE-L     | 49.50 | #2
Text Summarization | SAMSum Corpus  | SICK  | BERTScore-F1| 71.92 | #1
