Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering

We present a new kind of question answering dataset, OpenBookQA, modeled after open book exams for assessing human understanding of a subject. The open book that comes with our questions is a set of 1329 elementary level science facts. Roughly 6000 questions probe an understanding of these facts and their application to novel situations. This requires combining an open book fact (e.g., metals conduct electricity) with broad common knowledge (e.g., a suit of armor is made of metal) obtained from other sources. While existing QA datasets over documents or knowledge bases, being generally self-contained, focus on linguistic understanding, OpenBookQA probes a deeper understanding of both the topic---in the context of common knowledge---and the language it is expressed in. Human performance on OpenBookQA is close to 92%, but many state-of-the-art pre-trained QA methods perform surprisingly poorly, worse than several simple neural baselines we develop. Our oracle experiments designed to circumvent the knowledge retrieval bottleneck demonstrate the value of both the open book and additional facts. We leave it as a challenge to solve the retrieval problem in this multi-hop setting and to close the large gap to human performance.
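For readers who want to inspect the data directly, here is a minimal sketch of loading one question. It assumes the public Hugging Face `datasets` hub mirror of the dataset (id `openbookqa`, configs `main` and `additional`) and that mirror's field names (`question_stem`, `choices`, `answerKey`, `fact1`); these are conventions of the mirror, not something the paper itself specifies.

```python
# Minimal sketch: load OpenBookQA from the Hugging Face hub and print one
# question. Dataset id, config names, and field names are assumptions based
# on the public hub mirror, not part of the paper.
from datasets import load_dataset

# "main" holds the ~6000 4-way multiple-choice questions; "additional" also
# attaches the open-book science fact (fact1) each question was written from.
ds = load_dataset("openbookqa", "additional")

ex = ds["train"][0]
print(ex["question_stem"])                    # question text
for label, text in zip(ex["choices"]["label"], ex["choices"]["text"]):
    print(f"  ({label}) {text}")              # the four answer options
print("gold:", ex["answerKey"])               # one of "A".."D"
print("book fact:", ex["fact1"])              # the linked open-book fact
```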

EMNLP 2018

Datasets


Introduced in the Paper: OpenBookQA

Used in the Paper: ConceptNet, StoryCloze, Worldtree, TQA

Results from the Paper


Task                Dataset     Model                                                                  Accuracy (%)  Global Rank
Question Answering  OpenBookQA  BiLSTM max-out question-match (science fact + common knowledge fact)  76.9          #22
Question Answering  OpenBookQA  BiLSTM max-out question-match (WordNet + science fact)                 56.3          #29
Question Answering  OpenBookQA  BiLSTM max-out question-match (with a science fact)                    55.8          #31
Question Answering  OpenBookQA  Random chance baseline                                                 25.0          #41
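The 25.0 row is simply the expected accuracy of uniform guessing over the four answer choices; the short simulation below (a toy sketch, not from the paper) reproduces that figure.

```python
# Toy sketch: simulate uniform random guessing on 4-way multiple choice to
# reproduce the 25% random-chance baseline in the table above.
import random

def random_baseline_accuracy(num_questions: int, num_choices: int = 4,
                             seed: int = 0) -> float:
    rng = random.Random(seed)
    # Treat index 0 as the gold answer; a uniform guess matches it with
    # probability 1 / num_choices, so accuracy converges to 0.25 here.
    correct = sum(rng.randrange(num_choices) == 0 for _ in range(num_questions))
    return correct / num_questions

print(random_baseline_accuracy(1_000_000))  # ~0.25
```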
