Simple and Effective Multi-Paragraph Reading Comprehension

ACL 2018 · Christopher Clark, Matt Gardner

We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Our proposed solution trains models to produce well-calibrated confidence scores for their results on individual paragraphs. We sample multiple paragraphs from the documents during training, and use a shared-normalization training objective that encourages the model to produce globally correct output. We combine this method with a state-of-the-art pipeline for training models on document QA data. Experiments demonstrate strong performance on several document QA datasets. Overall, we achieve a score of 71.3 F1 on the web portion of TriviaQA, a large improvement over the 56.7 F1 of the previous best system.
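The shared-normalization objective mentioned in the abstract can be sketched as follows: span scores from all sampled paragraphs of one question are normalized with a single softmax, so a confidence score from one paragraph is directly comparable to one from another. The function name and NumPy formulation below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def shared_norm_loss(paragraph_logits, answer_masks):
    """Sketch of a shared-normalization objective.

    paragraph_logits: list of 1-D arrays, one per sampled paragraph,
        holding a score for each candidate answer span in that paragraph.
    answer_masks: list of 1-D boolean arrays of the same shapes, marking
        which spans are correct answers.
    Returns the negative log of the total probability assigned to all
    correct spans, where the softmax runs over every span from every
    paragraph jointly (rather than per paragraph).
    """
    scores = np.concatenate(paragraph_logits)
    mask = np.concatenate(answer_masks)
    # One softmax over all spans from all paragraphs of this question,
    # so paragraph-level scores stay on a shared, comparable scale.
    exp = np.exp(scores - scores.max())
    probs = exp / exp.sum()
    # Maximize the summed probability of the correct spans.
    return -np.log(probs[mask].sum())
```

Because the normalization is shared, raising the score of a span in one paragraph necessarily lowers the probability of spans in every other paragraph, which is what pushes the model toward globally correct output.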


Results from the Paper


Ranked #28 on Question Answering on TriviaQA (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Question Answering | SQuAD1.1 | BiDAF + Self Attention (single model) | EM | 72.139 | #144 | |
| Question Answering | SQuAD1.1 | BiDAF + Self Attention (single model) | F1 | 81.048 | #146 | |
| Question Answering | TriviaQA | S-Norm | EM | 66.37 | #28 | Yes |
| Question Answering | TriviaQA | S-Norm | F1 | 71.32 | #6 | Yes |

Methods


No methods listed for this paper.