UnitedQA: A Hybrid Approach for Open Domain Question Answering

To date, most recent work under the retriever-reader framework for open-domain QA has focused exclusively on either extractive or generative readers. In this paper, we study a hybrid approach that leverages the strengths of both models. We apply novel techniques to enhance both extractive and generative readers built upon recent pretrained neural language models, and find that proper training methods yield large improvements over previous state-of-the-art models. We demonstrate that a simple hybrid approach combining answers from both readers can effectively take advantage of both extractive and generative answer-inference strategies, and that it outperforms single models as well as homogeneous ensembles. Our approach outperforms previous state-of-the-art models by 3.3 and 2.7 points in exact match on NaturalQuestions and TriviaQA, respectively.
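The abstract describes combining answers from an extractive and a generative reader at the answer level. As a rough illustration (the function names, score normalization, and the linear weighting scheme below are assumptions for the sketch, not the paper's exact formulation), such a hybrid can merge each reader's scored candidates and pick the highest-scoring normalized answer string:

```python
# Hypothetical sketch of answer-level hybrid inference: merge scored
# candidates from an extractive and a generative reader, summing weighted
# scores over normalized answer strings, then take the argmax.
# The weighting (alpha) and normalization are illustrative assumptions.
from collections import defaultdict


def normalize(ans: str) -> str:
    """Lowercase and collapse whitespace so equivalent answer strings merge."""
    return " ".join(ans.lower().split())


def hybrid_answer(extractive, generative, alpha=0.5):
    """extractive / generative: lists of (answer, score) candidate pairs.

    alpha weights the extractive reader's scores; (1 - alpha) weights
    the generative reader's scores.
    """
    scores = defaultdict(float)
    for ans, s in extractive:
        scores[normalize(ans)] += alpha * s
    for ans, s in generative:
        scores[normalize(ans)] += (1 - alpha) * s
    return max(scores, key=scores.get)
```

With equal weights, an answer proposed by both readers accumulates score from both lists, so agreement between the two inference strategies is rewarded even when neither reader ranks it first.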

ACL 2021
| Task                           | Dataset                 | Model                  | Metric      | Value | Global Rank |
|--------------------------------|-------------------------|------------------------|-------------|-------|-------------|
| Question Answering             | EfficientQA dev         | UnitedQA               | Accuracy    | 54.1  | #1          |
| Question Answering             | EfficientQA test        | UnitedQA               | Accuracy    | 54    | #1          |
| Open-Domain Question Answering | Natural Questions       | UnitedQA (Hybrid)      | Exact Match | 54.7  | #4          |
| Question Answering             | Natural Questions (long)| UnitedQA (Hybrid)      | EM          | 54.7  | #5          |
| Open-Domain Question Answering | TriviaQA                | UnitedQA (Hybrid)      | Exact Match | 70.5  | #1          |
| Question Answering             | TriviaQA                | UnitedQA (Hybrid reader)| F1         | 70.3  | #7          |
