Revealing the Importance of Semantic Retrieval for Machine Reading at Scale

IJCNLP 2019  ·  Yixin Nie, Songhe Wang, Mohit Bansal

Machine Reading at Scale (MRS) is a challenging task in which a system is given an input query and asked to produce a precise output by "reading" information from a large knowledge base. The task has gained popularity through its natural combination of information retrieval (IR) and machine comprehension (MC). Advancements in representation learning have led to separate progress in both IR and MC; however, very few studies have examined the relationship and combined design of retrieval and comprehension at different levels of granularity for the development of MRS systems. In this work, we give general guidelines on system design for MRS by proposing a simple yet effective pipeline system with special consideration of hierarchical semantic retrieval at both the paragraph and sentence levels, and of its potential effects on the downstream task. The system is evaluated on both fact verification and open-domain multi-hop QA, achieving state-of-the-art results on the leaderboard test sets of both FEVER and HotpotQA. To further demonstrate the importance of semantic retrieval, we present ablation and analysis studies to quantify the contribution of the neural retrieval modules at both the paragraph and sentence levels, and illustrate that intermediate semantic retrieval modules are vital not only for effectively filtering upstream information and thus saving downstream computation, but also for shaping the upstream data distribution and providing better data for downstream modeling. Code/data made publicly available at:
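The hierarchical pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration of the two-stage design (paragraph-level retrieval filtering the knowledge base, then sentence-level retrieval selecting evidence for the downstream module); the neural relevance scorers used in the paper are replaced here by a toy token-overlap score, and all function names and thresholds are illustrative assumptions, not the authors' implementation.

```python
def overlap_score(query, text):
    """Toy stand-in for a neural relevance scorer (illustrative only)."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / (len(q) or 1)

def retrieve(query, candidates, threshold, top_k):
    """Score candidates against the query, keep the top-k above a threshold.

    Thresholding mirrors the paper's idea that intermediate retrieval
    both filters upstream information and shapes downstream input.
    """
    scored = sorted(((overlap_score(query, c), c) for c in candidates),
                    reverse=True)
    return [c for s, c in scored[:top_k] if s >= threshold]

def semantic_retrieval_pipeline(query, knowledge_base,
                                p_threshold=0.2, s_threshold=0.2):
    # Stage 1: paragraph-level retrieval filters the large knowledge base.
    paragraphs = retrieve(query, knowledge_base, p_threshold, top_k=5)
    # Stage 2: sentence-level retrieval selects supporting evidence
    # from the surviving paragraphs for the downstream QA/verification model.
    sentences = [s for p in paragraphs for s in p.split(". ")]
    evidence = retrieve(query, sentences, s_threshold, top_k=5)
    return evidence
```

In the actual system, both stages would be neural modules trained for relevance; the point of the sketch is only the structure: each stage narrows the candidate set before the more expensive downstream computation runs.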



Results from the Paper

Task: Question Answering  ·  Dataset: HotpotQA  ·  Model: SemanticRetrievalMRS

Metric      Value   Global Rank
ANS-EM      0.453   #49
ANS-F1      0.573   #51
SUP-EM      0.387   #46
SUP-F1      0.708   #47
JOINT-EM    0.251   #46
JOINT-F1    0.476   #49

