R2-D2: A Modular Baseline for Open-Domain Question Answering

This work presents R2-D2 (Rank twice, reaD twice), a novel four-stage open-domain QA pipeline. The pipeline is composed of a retriever, a passage reranker, an extractive reader, a generative reader, and a mechanism that aggregates the final prediction from all of the system's components. We demonstrate its strength across three open-domain QA datasets: NaturalQuestions, TriviaQA and EfficientQA, surpassing the state of the art on the first two. Our analysis demonstrates that: (i) combining the extractive and generative readers yields absolute improvements of up to 5 exact-match points and is at least twice as effective as a posterior-averaging ensemble of the same models with different parameters, and (ii) the extractive reader, despite having fewer parameters, can match the performance of the generative reader on extractive QA datasets.
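
The sketch below illustrates the four-stage flow the abstract describes (retrieve, rerank, read extractively, read generatively, aggregate). All class and function names are illustrative placeholders, not the authors' implementation; the score-summing aggregation at the end is a stand-in assumption for the paper's component-aggregation mechanism.

```python
# Hypothetical sketch of a four-stage retrieve/rerank/read/read pipeline.
from dataclasses import dataclass
from typing import List


@dataclass
class ScoredAnswer:
    text: str
    score: float  # reader-assigned score (e.g. a log-probability)


def answer_question(question: str,
                    retriever,          # e.g. a DPR-style dense retriever
                    reranker,           # cross-encoder passage reranker
                    extractive_reader,  # span-extraction reader
                    generative_reader,  # seq2seq reader
                    top_k: int = 100,
                    rerank_k: int = 20) -> str:
    # Stage 1: dense retrieval of candidate passages.
    passages = retriever.retrieve(question, k=top_k)

    # Stage 2: rerank passages with a cross-encoder, keep the best ones.
    passages = reranker.rerank(question, passages)[:rerank_k]

    # Stage 3: extractive reader proposes scored answer spans.
    extractive: List[ScoredAnswer] = extractive_reader.read(question, passages)

    # Stage 4: generative reader produces scored answer strings.
    generative: List[ScoredAnswer] = generative_reader.read(question, passages)

    # Aggregation (illustrative): sum scores per answer string across both
    # readers and return the highest-scoring answer.
    fused = {}
    for ans in extractive + generative:
        fused[ans.text] = fused.get(ans.text, 0.0) + ans.score
    return max(fused, key=fused.get)
```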

Findings of EMNLP 2021
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Open-Domain Question Answering | Natural Questions | R2-D2 \w HN-DPR | Exact Match | 55.9 | # 2 |
| Question Answering | Natural Questions | R2-D2 (full) | EM | 55.9 | # 4 |
| Passage Retrieval | Natural Questions | DPR+ELECTRA-large-extreader-reranker | Precision@20 | 85.26 | # 2 |
| Passage Retrieval | Natural Questions | DPR+ELECTRA-large-extreader-reranker | Precision@100 | 88.25 | # 5 |
| Passage Retrieval | Natural Questions | DPR+RoBERTa-base-crossencoder-reranker | Precision@20 | 84.46 | # 4 |
| Passage Retrieval | Natural Questions | DPR+RoBERTa-base-crossencoder-reranker | Precision@100 | 88.03 | # 6 |
| Question Answering | Natural Questions (long) | R2-D2 \w HN-DPR | EM | 55.9 | # 3 |
