Question answering (QA) tasks have been posed in a variety of formats, such as extractive span selection and multiple choice. This has led to format-specialized models, and even to an implicit division in the QA community. We argue that such boundaries are artificial and perhaps unnecessary, given that the reasoning abilities we seek to teach are not governed by the format. As evidence, we use the latest advances in language modeling to build a single pre-trained QA model, UnifiedQA, that performs surprisingly well across 17 QA datasets spanning 4 diverse formats. UnifiedQA performs on par with 9 different format-specialized models, each trained on an individual dataset. Even on 12 unseen datasets of observed formats, UnifiedQA performs surprisingly well, showing strong generalization from its out-of-format training data. Finally, simply fine-tuning this pre-trained QA model into specialized models yields a new state of the art on 6 datasets, establishing UnifiedQA as a strong starting point for building QA systems.
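The key idea is to cast every QA format into one text-to-text encoding so a single seq2seq model can consume them all. The sketch below illustrates that idea; the helper name, the `\n` field separator, and the lowercasing/choice-enumeration conventions are assumptions for illustration, not the released UnifiedQA code.

```python
# Sketch of a unified text-to-text QA encoding, in the spirit of UnifiedQA.
# Assumptions (not from the paper's released code): fields are lowercased,
# joined with a "\n" separator, and choices are enumerated as (a), (b), ...
def encode_example(question, choices=None, context=None):
    """Render any QA format as a single input string.

    Extractive/abstractive QA supplies `context`; multiple-choice QA
    supplies `choices`; yes/no QA supplies only the question.
    """
    parts = [question.lower()]
    if choices:
        labels = "abcdefghij"
        parts.append(" ".join(
            f"({labels[i]}) {c.lower()}" for i, c in enumerate(choices)
        ))
    if context:
        parts.append(context.lower())
    return " \n ".join(parts)


# Multiple-choice example: question plus enumerated answer options.
mc = encode_example("Which is heavier?", choices=["a feather", "an elephant"])

# Extractive example: question plus a supporting passage.
ex = encode_example("Where was the race held?",
                    context="The race took place in Monza, Italy.")
```

Because every format reduces to plain strings like these, out-of-format training data can transfer: the model never sees an explicit format label, only text in and text out.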

PDF Abstract | Findings of 2020

Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Question Answering | CommonsenseQA | UnifiedQA | Test Accuracy | 79.1 | # 1 |
| Multi-task Language Understanding | MMLU | UnifiedQA | Humanities | 45.6 | # 9 |
| Multi-task Language Understanding | MMLU | UnifiedQA | Average (%) | 48.9 | # 24 |
| Multi-task Language Understanding | MMLU | UnifiedQA | Parameters (Billions) | 11 | # 13 |
| Multi-task Language Understanding | MMLU | UnifiedQA | STEM | 40.2 | # 13 |
| Multi-task Language Understanding | MMLU | UnifiedQA | Social Sciences | 56.6 | # 8 |
| Multi-task Language Understanding | MMLU | UnifiedQA | Other | 54.6 | # 8 |

Results from Other Papers

| Task | Dataset | Model | Metric Name | Metric Value | Rank | Source Paper |
|---|---|---|---|---|---|---|
| Common Sense Reasoning | CommonsenseQA | UnifiedQA* | Accuracy | 79.1 | # 5 | Khashabi et al. (2020) |
