SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization

This paper introduces the SAMSum Corpus, a new dataset with abstractive dialogue summaries. We investigate the challenges it poses for automated summarization by testing several models and comparing their results with those obtained on a corpus of news articles. We show that model-generated summaries of dialogues achieve higher ROUGE scores than model-generated summaries of news -- in contrast with human evaluators' judgement. This suggests that the challenging task of abstractive dialogue summarization requires dedicated models and non-standard quality measures. To our knowledge, our study is the first attempt to introduce a high-quality chat-dialogue corpus, manually annotated with abstractive summaries, which can be used by the research community for further studies.

PDF Abstract (WS 2019)


Introduced in the Paper: SAMSum Corpus


Results from the Paper

Task                     Dataset   Model                     Metric    Value   Rank
Dialogue Summarization   SAMSum    DynamicConv + GPT2 emb.   ROUGE-1   45.41   # 1
                                                             ROUGE-2   20.65   # 1
                                                             ROUGE-L   41.45   # 1
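The ROUGE scores above measure n-gram overlap between a model-generated summary and a human reference. As a minimal sketch of what ROUGE-N computes (the leaderboard values come from the paper's own evaluation, not this code), the F1 variant can be written as follows; the example sentences are hypothetical, not taken from the dataset:

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> float:
    """ROUGE-N F1: overlap of n-grams between candidate and reference."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical SAMSum-style pair (illustrative only).
reference = "Amanda baked cookies and will bring Jerry some tomorrow"
candidate = "Amanda will bring Jerry cookies tomorrow"
print(round(rouge_n(candidate, reference, n=1), 2))  # → 0.8
```

Published implementations add stemming and (for ROUGE-L) longest-common-subsequence matching, which this sketch omits.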

