Making Neural QA as Simple as Possible but not Simpler

CoNLL 2017  ·  Dirk Weissenborn, Georg Wiese, Laura Seiffe

The recent development of large-scale question answering (QA) datasets has triggered a substantial amount of research into end-to-end neural architectures for QA. Increasingly complex systems have been conceived without comparison to simpler neural baselines that would justify their complexity. In this work, we propose a simple heuristic that guides the development of neural baseline systems for the extractive QA task. We find that two ingredients are necessary for building a high-performing neural QA system: first, awareness of question words while processing the context, and second, a composition function that goes beyond simple bag-of-words modeling, such as a recurrent neural network. Our results show that FastQA, a system that meets these two requirements, achieves very competitive performance compared with existing models. We argue that this surprising finding puts the results of previous systems and the complexity of recent QA datasets into perspective.



Results from the Paper

| Task               | Dataset      | Model                   | Metric | Value  | Global Rank |
|--------------------|--------------|-------------------------|--------|--------|-------------|
| Question Answering | NewsQA       | FastQAExt               | F1     | 56.1   | #8          |
| Question Answering | NewsQA       | FastQAExt               | EM     | 43.7   | #5          |
| Question Answering | SQuAD1.1     | FastQA                  | EM     | 68.436 | #165        |
| Question Answering | SQuAD1.1     | FastQA                  | F1     | 77.070 | #176        |
| Question Answering | SQuAD1.1     | FastQAExt               | EM     | 70.849 | #155        |
| Question Answering | SQuAD1.1     | FastQAExt               | F1     | 78.857 | #164        |
| Question Answering | SQuAD1.1 dev | FastQAExt (beam-size 5) | EM     | 70.3   | #37         |
| Question Answering | SQuAD1.1 dev | FastQAExt (beam-size 5) | F1     | 78.5   | #41         |
