Query-Reduction Networks for Question Answering

14 Jun 2016  ·  Minjoon Seo, Sewon Min, Ali Farhadi, Hannaneh Hajishirzi

In this paper, we study the problem of question answering when reasoning over multiple facts is required. We propose the Query-Reduction Network (QRN), a variant of the Recurrent Neural Network (RNN) that effectively handles both short-term (local) and long-term (global) sequential dependencies to reason over multiple facts. QRN treats the context sentences as a sequence of state-changing triggers, and reduces the original query to a more informed query as it observes each trigger (context sentence) through time. Our experiments show that QRN achieves state-of-the-art results on the bAbI QA and dialog tasks, as well as on a real goal-oriented dialog dataset. In addition, the QRN formulation allows parallelization along the RNN's time axis, saving an order of magnitude in time complexity for training and inference.
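The abstract's key architectural point is that the update gate and the candidate reduced query depend only on the current sentence and the input query, not on the previous hidden state, which is what makes parallelization along the time axis possible. Below is a minimal NumPy sketch of a QRN-style layer under that reading; the weight names, dimensions, and nonlinearities are illustrative assumptions, not taken from a released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 4, 3   # hidden size and number of context sentences (illustrative)

# Hypothetical parameters; names and shapes are assumptions for this sketch.
Wz = rng.standard_normal((d, 2 * d)) * 0.1   # update-gate weights
Wc = rng.standard_normal((d, 2 * d)) * 0.1   # candidate reduced-query weights

X = rng.standard_normal((T, d))   # embeddings of context sentences x_1..x_T
q = rng.standard_normal(d)        # embedding of the original query

# Gates and candidate reduced queries depend only on (x_t, q), not on the
# previous hidden state, so they can be computed for all t at once --
# this is the property that permits parallelization along the time axis.
XQ = np.concatenate([X, np.tile(q, (T, 1))], axis=1)   # (T, 2d)
Z = 1.0 / (1.0 + np.exp(-(XQ @ Wz.T)))                 # update gates, (T, d)
C = np.tanh(XQ @ Wc.T)                                 # candidates, (T, d)

# The remaining recurrence h_t = z_t * c_t + (1 - z_t) * h_{t-1} is linear
# in h, so a parallel prefix scan is possible; it is shown sequentially
# here for clarity.
h = np.zeros(d)
for t in range(T):
    h = Z[t] * C[t] + (1.0 - Z[t]) * h   # reduced (more informed) query

print(h.shape)   # (4,)
```

Each step interpolates between the previous reduced query and a candidate computed from the current sentence, so sentences the gate deems irrelevant leave the query nearly unchanged.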



Results from the Paper

Question Answering on bAbI (model: QRN)
  Accuracy (trained on 10k):  99.7%   (global rank #2)
  Accuracy (trained on 1k):   90.1%   (global rank #1)
  Mean Error Rate:            0.3%    (global rank #1)

Procedural Text Understanding on ProPara (model: QRN, Seo et al., 2017)
  Sentence-level Cat 1 (Accuracy):  52.4   (global rank #6)
  Sentence-level Cat 2 (Accuracy):  15.5   (global rank #7)
  Sentence-level Cat 3 (Accuracy):  10.9   (global rank #5)
  Document level (P):               55.5   (global rank #5)
  Document level (R):               31.3   (global rank #6)
  Document level (F1):              40.0   (global rank #6)

