Answering Any-hop Open-domain Questions with Iterative Document Reranking

16 Sep 2020 · Ping Nie, Yuyu Zhang, Arun Ramamurthy, Le Song

Existing approaches for open-domain question answering (QA) are typically designed for questions that require either single-hop or multi-hop reasoning, which makes strong assumptions about the complexity of the questions to be answered. Moreover, multi-step document retrieval often brings in a larger number of relevant but non-supporting documents, which degrades the downstream noise-sensitive reader module used for answer extraction. To address these challenges, we propose a unified QA framework to answer any-hop open-domain questions, which iteratively retrieves, reranks, and filters documents, and adaptively determines when to stop the retrieval process. To improve retrieval accuracy, we propose a graph-based reranking model that performs multi-document interaction as the core of our iterative reranking framework. Our method consistently achieves performance comparable to or better than the state-of-the-art on both single-hop and multi-hop open-domain QA datasets, including Natural Questions Open, SQuAD Open, and HotpotQA.
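The abstract describes an iterative retrieve-rerank-filter loop with adaptive stopping. The sketch below illustrates how such a loop could be wired together; it is only an illustration based on the abstract, and every component interface (the retriever, the graph-based reranker, the reader, and their methods) is a hypothetical placeholder, not the authors' released implementation.

```python
# Minimal sketch of an iterative retrieve-rerank-filter loop with adaptive
# stopping, as described in the abstract. All helper objects and methods
# (retriever.retrieve, reranker.rerank, reranker.is_sufficient,
# reader.extract_answer) are assumed interfaces, not the authors' code.
from typing import List


def answer_question(question: str,
                    retriever,            # document retriever (assumed interface)
                    reranker,             # graph-based multi-document reranker (assumed interface)
                    reader,               # reader module for answer extraction (assumed interface)
                    max_hops: int = 4,
                    keep_top_k: int = 10) -> str:
    """Iteratively retrieve, rerank, and filter documents, stopping adaptively."""
    query = question
    context_docs: List[str] = []

    for hop in range(max_hops):
        # 1. Retrieve candidate documents for the current (possibly expanded) query.
        candidates = retriever.retrieve(query)

        # 2. Rerank candidates jointly with the documents kept so far, so that
        #    multi-document interaction can surface supporting evidence.
        ranked = reranker.rerank(question, context_docs + candidates)

        # 3. Filter: keep only the top-scoring documents as the new context.
        context_docs = ranked[:keep_top_k]

        # 4. Adaptive stopping: halt retrieval once the current context is
        #    judged sufficient to answer the question.
        if reranker.is_sufficient(question, context_docs):
            break

        # 5. Otherwise, expand the query with the retained evidence and
        #    continue to the next hop.
        query = question + " " + " ".join(context_docs)

    # Extract the final answer span from the retained documents.
    return reader.extract_answer(question, context_docs)
```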


Datasets

Natural Questions Open, SQuAD Open, HotpotQA
Results from the Paper


Task                 Dataset    Model   Metric     Value   Global Rank
Question Answering   HotpotQA   DDRQA   ANS-EM     0.625   # 14
                                        ANS-F1     0.759   # 13
                                        SUP-EM     0.510   # 17
                                        SUP-F1     0.789   # 17
                                        JOINT-EM   0.360   # 22
                                        JOINT-F1   0.639   # 16
