DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs

Reading comprehension has recently seen rapid progress, with systems matching humans on the most popular datasets for the task. However, a large body of work has highlighted the brittleness of these systems, showing that there is much work left to be done. We introduce a new English reading comprehension benchmark, DROP, which requires Discrete Reasoning Over the content of Paragraphs. In this crowdsourced, adversarially-created, 96k-question benchmark, a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than was necessary for prior datasets. We apply state-of-the-art methods from both the reading comprehension and semantic parsing literature to this dataset and show that the best systems only achieve 32.7% F1 on our generalized accuracy metric, while expert human performance is 96.0%. We additionally present a new model that combines reading comprehension methods with simple numerical reasoning to achieve 47.0% F1.
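To make the required reasoning concrete, below is a minimal Python sketch of the three discrete operations the abstract names (addition, counting, and sorting/comparison), applied to numbers mentioned in a short football-style passage. The passage, questions, and the extract_numbers helper are illustrative assumptions for this sketch; they are not taken from the dataset or from any of the baseline systems.

```python
import re

# Toy passage in the style of a game summary (illustrative only, not from DROP).
passage = (
    "The Bears opened the scoring with a 25-yard field goal in the first quarter "
    "and added a 40-yard field goal and a 12-yard field goal after halftime."
)

def extract_numbers(text):
    """Collect the numeric mentions a system must reason over (hypothetical helper)."""
    return [float(n) for n in re.findall(r"\d+(?:\.\d+)?", text)]

numbers = extract_numbers(passage)   # [25.0, 40.0, 12.0]

# "How many yards of field goals were kicked in total?"  -> addition
print(sum(numbers))                  # 77.0

# "How many field goals were kicked?"                     -> counting
print(len(numbers))                  # 3

# "How long was the longest field goal?"                  -> sorting / max
print(max(numbers))                  # 40.0
```

The hard part of the benchmark, which this regex-based toy skips entirely, is deciding which mentions in the paragraph the question actually refers to before any arithmetic is performed.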


Datasets


Introduced in the Paper:

DROP

Used in the Paper:

WikiTableQuestions

Results from the Paper


Task                 Dataset     Model     Metric   Value   Global Rank
Question Answering   DROP Test   NAQANet   F1       47.01   #14
Question Answering   DROP Test   BERT      F1       32.7    #16

Methods


No methods listed for this paper.