Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering

EACL 2021 · Gautier Izacard, Edouard Grave

Generative models for open domain question answering have proven to be competitive, without resorting to external knowledge. While promising, this approach requires models with billions of parameters, which are expensive to train and query. In this paper, we investigate how much these models can benefit from retrieving text passages that potentially contain evidence. We obtain state-of-the-art results on the Natural Questions and TriviaQA open benchmarks. Interestingly, we observe that the performance of this method improves significantly as the number of retrieved passages increases. This is evidence that generative models are good at aggregating and combining evidence from multiple passages.
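The paper's Fusion-in-Decoder (FiD) model encodes each retrieved passage independently with the question, then lets the decoder attend over the concatenation of all passage encodings. Below is a minimal sketch of that fusion mechanism, assuming the HuggingFace `transformers` library and a plain `t5-base` checkpoint; this is not the authors' implementation or trained weights, so the generated output is illustrative only.

```python
# Fusion-in-Decoder-style inference sketch (assumes: pip install torch transformers sentencepiece).
# Uses an off-the-shelf t5-base for illustration; a real FiD model is
# fine-tuned with this encoding scheme, so outputs here are not meaningful answers.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

question = "Where was Marie Curie born?"
passages = [  # stand-ins for passages returned by a retriever
    "Marie Curie was born in Warsaw, in what was then the Kingdom of Poland.",
    "Curie moved to Paris in 1891 to continue her studies at the Sorbonne.",
]

# 1) Encode each (question, passage) pair independently.
inputs = [f"question: {question} context: {p}" for p in passages]
enc = tokenizer(inputs, return_tensors="pt", padding=True,
                truncation=True, max_length=200)
encoder_out = model.encoder(input_ids=enc.input_ids,
                            attention_mask=enc.attention_mask)

# 2) Fuse: concatenate per-passage encodings along the sequence axis,
#    so the decoder cross-attends over all passages at once.
hidden = encoder_out.last_hidden_state          # (n_passages, seq_len, d_model)
fused = hidden.reshape(1, -1, hidden.size(-1))  # (1, n_passages * seq_len, d_model)
fused_mask = enc.attention_mask.reshape(1, -1)

# 3) Decode a single answer conditioned on the fused representation.
with torch.no_grad():
    answer_ids = model.generate(
        encoder_outputs=BaseModelOutput(last_hidden_state=fused),
        attention_mask=fused_mask,
        max_length=20,
    )
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```

Because passages are encoded independently, encoder cost grows linearly with the number of passages, while the decoder's cross-attention over the concatenated states combines evidence across all of them, consistent with the abstract's observation that accuracy improves as more passages are retrieved.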


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Question Answering | ConditionalQA | FiD | Conditional (answers) | 45.2 / 49.7 | # 1 |
| Question Answering | ConditionalQA | FiD | Conditional (w/ conditions) | 4.7 / 5.8 | # 1 |
| Question Answering | ConditionalQA | FiD | Overall (answers) | 44.4 / 50.8 | # 1 |
| Question Answering | ConditionalQA | FiD | Overall (w/ conditions) | 35.0 / 40.6 | # 1 |
| Question Answering | Natural Questions | FiD-KD (full) | EM | 54.7 | # 5 |
| Question Answering | Natural Questions | FiD (full) | EM | 51.4 | # 8 |
| Question Answering | TriviaQA | Fusion-in-Decoder (large) | EM | 67.6 | # 26 |

Methods


No methods listed for this paper.