WikiHow: A Large Scale Text Summarization Dataset

18 Oct 2018  ·  Mahnaz Koupaee, William Yang Wang ·

Sequence-to-sequence models have recently achieved state-of-the-art performance in summarization. However, few large-scale, high-quality datasets are available, and almost all of them consist of news articles written in a specific style. Moreover, abstractive human-style systems that describe content at a deeper level require data with higher levels of abstraction. In this paper, we present WikiHow, a dataset of more than 230,000 article-summary pairs extracted and constructed from an online knowledge base written by different human authors. The articles span a wide range of topics and therefore exhibit a high diversity of styles. We evaluate the performance of existing methods on WikiHow to highlight its challenges and set baselines for further improvement.


Datasets


Introduced in the Paper:

WikiHow

Used in the Paper:

CNN/Daily Mail
New York Times Annotated Corpus

Results from the Paper


Task: Text Summarization · Dataset: WikiHow · Model: Pointer-generator + coverage

Metric     Value    Global Rank
ROUGE-1    28.53    # 3
ROUGE-2     9.23    # 2
ROUGE-L    26.54    # 3
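The scores above are ROUGE metrics, which measure n-gram overlap between a generated summary and a reference. As a rough illustration of what ROUGE-1 captures, here is a simplified unigram-overlap F1 computation; this is only a sketch, not the official ROUGE implementation used in the paper's evaluation (which also applies stemming and other preprocessing).

```python
# Simplified ROUGE-1 F1: unigram overlap between a candidate summary
# and a reference summary. Illustrative only; not the official scorer.
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    """F1 of unigram overlap between candidate and reference texts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped matching counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Example: 3 shared unigrams out of 5 candidate / 4 reference tokens
print(round(rouge_1_f1("how to bake a cake", "how to bake bread"), 3))
```

ROUGE-2 is the analogous computation over bigrams, and ROUGE-L is based on the longest common subsequence rather than fixed n-grams.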

Methods


No methods listed for this paper.