TLDR9+: A Large Scale Resource for Extreme Summarization of Social Media Posts

Recent summarization models contain millions of parameters, and their performance depends heavily on the abundance of training data. While most existing summarization corpora contain on the order of thousands to one million instances, the generation of large-scale summarization datasets with several million instances is yet to be explored. In practice, more data helps models generalize learned patterns to unseen inputs. In this paper, we introduce TLDR9+ -- a large-scale summarization dataset -- containing over 9 million training instances extracted from the Reddit discussion forum (https://github.com/sajastu/reddit_collector). This dataset is specifically gathered for extreme summarization (i.e., generating a one-sentence summary with high compression and abstraction) and is more than twice as large as the previously proposed largest dataset. We go one step further and, with the help of human annotations, distill a more fine-grained dataset by sampling high-quality instances from TLDR9+, which we call TLDRHQ. We further benchmark different state-of-the-art summarization models on our proposed datasets.
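The dataset is built from Reddit posts whose authors append a "TL;DR" summary to their own text; the collection pipeline is released at the repository linked above. As a rough illustration only (the post text here is made up, and the actual marker handling in the released code may differ), a post can be split on its TL;DR marker into a source document and its one-sentence summary:

```python
import re

# Hypothetical Reddit post; the real collection code lives at
# https://github.com/sajastu/reddit_collector and may use different rules.
post = ("Spent all weekend debugging a config typo. "
        "TL;DR: always lint your YAML files.")

# Find the first "TL;DR" marker, tolerating case and punctuation variants
# (e.g. "tldr", "TL;DR -", "tl dr:").
match = re.search(r"tl\s*;?\s*dr\s*[:,-]*\s*", post, flags=re.IGNORECASE)

# Text before the marker is the source document; text after is the summary.
document = post[:match.start()].strip()
summary = post[match.end():].strip()
```

This also shows why the task is "extreme" summarization: the target is a single short sentence compressing the entire post.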

PDF Abstract EMNLP (newsum) 2021

Datasets


Introduced in the Paper:

TLDR9+

Used in the Paper:

Reddit TIFU

Results from the Paper


Task                   Dataset  Model       RG-1 (%)  RG-2 (%)  RG-L (%)  Global Rank
Extreme Summarization  TLDR9+   ORACLE-EXT  30.26     9.74      20.60     # 1
Extreme Summarization  TLDR9+   BART        23.59     9.69      18.62     # 2
Extreme Summarization  TLDR9+   BERTSUMABS  23.05     9.48      18.07     # 3
Extreme Summarization  TLDR9+   BERTSUMEXT  20.94     4.98      14.48     # 4
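The RG-1, RG-2, and RG-L columns report ROUGE-1, ROUGE-2, and ROUGE-L F1 scores. As a minimal sketch of what these metrics measure (the paper presumably uses a standard ROUGE package; this simplified version omits stemming and other preprocessing), ROUGE-N is the F1 of n-gram overlap between candidate and reference summaries, and ROUGE-L is based on their longest common subsequence:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n(candidate, reference, n=1):
    """F1 over n-gram overlap between candidate and reference tokens."""
    cand, ref = Counter(ngrams(candidate, n)), Counter(ngrams(reference, n))
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def rouge_l(candidate, reference):
    """F1 based on the longest common subsequence (LCS) length."""
    m, n = len(candidate), len(reference)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if candidate[i] == reference[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    lcs = dp[m][n]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / m, lcs / n
    return 2 * precision * recall / (precision + recall)
```

The ORACLE-EXT row is an extractive upper bound (sentences chosen to maximize ROUGE against the reference), which is why it outscores the trained models.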

Methods


No methods listed for this paper.