Text Understanding with the Attention Sum Reader Network

Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques, which currently seem to outperform all alternative approaches. We present a new, simple model that uses attention to directly pick the answer from the context, as opposed to computing the answer from a blended representation of the words in the document, as is usual in similar models. This makes the model particularly suitable for question-answering problems where the answer is a single word from the document. An ensemble of our models sets a new state of the art on all evaluated datasets.
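The "attention sum" idea described above can be sketched in a few lines: compute a softmax over the context tokens, then sum the probability mass assigned to repeated occurrences of the same word and return the word with the highest total. The sketch below is illustrative only, not the authors' code; it assumes per-token attention scores (e.g. dot products between a question encoding and contextual token encodings) have already been computed.

```python
import numpy as np

def attention_sum(context_tokens, attention_scores):
    """Pick the answer word by summing attention over repeated tokens."""
    # Softmax over token positions (numerically stabilized).
    exp = np.exp(attention_scores - attention_scores.max())
    probs = exp / exp.sum()
    # Aggregate probability mass per distinct word (the "attention sum").
    totals = {}
    for token, p in zip(context_tokens, probs):
        totals[token] = totals.get(token, 0.0) + p
    # The answer is the word with the highest aggregate attention.
    return max(totals, key=totals.get)

tokens = ["the", "cat", "sat", "on", "the", "mat"]
scores = np.array([1.5, 2.0, 0.3, 0.2, 1.5, 0.4])
# "cat" has the single highest score, but the two mentions of "the"
# jointly accumulate more attention, so "the" is selected.
print(attention_sum(tokens, scores))  # -> the
```

This is what distinguishes the model from blended-representation readers: no weighted average of word embeddings is ever formed, so the prediction is always a word that literally appears in the document.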

Published at ACL 2016.

Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
Question Answering | Children's Book Test | AS Reader (greedy) | Accuracy-CN | 67.5% | # 6
Question Answering | Children's Book Test | AS Reader (greedy) | Accuracy-NE | 71% | # 6
Question Answering | Children's Book Test | AS Reader (avg) | Accuracy-CN | 68.9% | # 5
Question Answering | Children's Book Test | AS Reader (avg) | Accuracy-NE | 70.6% | # 7
Question Answering | CNN / Daily Mail | AS Reader (ensemble model) | CNN | 75.4 | # 6
Question Answering | CNN / Daily Mail | AS Reader (ensemble model) | Daily Mail | 77.7 | # 4
Question Answering | CNN / Daily Mail | AS Reader (single model) | CNN | 69.5 | # 12
Question Answering | CNN / Daily Mail | AS Reader (single model) | Daily Mail | 73.9 | # 7
Open-Domain Question Answering | SearchQA | ASR | Unigram Acc | 41.3 | # 5
Open-Domain Question Answering | SearchQA | ASR | N-gram F1 | 22.8 | # 5

Methods


No methods listed for this paper.