Cut to the Chase: A Context Zoom-in Network for Reading Comprehension

In recent years, many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks. Most of these models struggle to reason over long documents and do not trivially generalize to cases where the answer is not present as a span in a given document. We present a novel neural architecture that extracts relevant regions based on a given question-document pair and generates a well-formed answer. To show the effectiveness of our architecture, we conducted several experiments on the recently proposed and challenging RC dataset 'NarrativeQA'. The proposed architecture outperforms state-of-the-art results by a 12.62% (ROUGE-L) relative improvement.


Results from the Paper

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Question Answering | NarrativeQA | ConZNet | BLEU-1 | 42.76 | #5 |
| Question Answering | NarrativeQA | ConZNet | BLEU-4 | 22.49 | #3 |
| Question Answering | NarrativeQA | ConZNet | METEOR | 19.24 | #4 |
| Question Answering | NarrativeQA | ConZNet | ROUGE-L | 46.67 | #4 |
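The ROUGE-L score reported above measures longest-common-subsequence (LCS) overlap between a generated answer and a reference. As a point of reference for the metric (not the exact evaluation script used for NarrativeQA, which may apply different tokenization or multi-reference handling), a minimal sketch of the standard sentence-level ROUGE-L F1 is:

```python
def lcs_len(a, b):
    # Dynamic-programming length of the longest common subsequence.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def rouge_l_f1(candidate, reference):
    # ROUGE-L F1 over whitespace-tokenized strings:
    # precision = LCS / |candidate|, recall = LCS / |reference|.
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_len(cand, ref)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_l_f1("he opened the door", "she opened the door")` scores 0.75, since three of the four tokens on each side form a common subsequence.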
