Contextualized Word Representations for Reading Comprehension

NAACL 2018 · Shimi Salant, Jonathan Berant

Reading a document and extracting an answer to a question about its content has attracted substantial attention recently. While most work has focused on the interaction between the question and the document, in this work we evaluate the importance of context when the question and document are processed independently. We take a standard neural architecture for this task, and show that by providing rich contextualized word representations from a large pre-trained language model as well as allowing the model to choose between context-dependent and context-independent word representations, we can obtain dramatic improvements and reach performance comparable to state-of-the-art on the competitive SQuAD dataset.
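To illustrate the second ingredient described in the abstract (letting the model choose between context-dependent and context-independent word representations), the sketch below mixes a static word embedding with an LM-derived contextual embedding via a learned sigmoid gate. This is a minimal, hypothetical PyTorch module under assumed dimensions and a gating formulation chosen for illustration; it is not the paper's exact RaSoR + TR + LM architecture.

```python
import torch
import torch.nn as nn

class GatedWordRepresentation(nn.Module):
    """Illustrative gate mixing a context-independent embedding (e.g. GloVe)
    with a context-dependent one (e.g. from a pre-trained language model).
    Hypothetical sketch, not the paper's exact architecture."""

    def __init__(self, static_dim: int, contextual_dim: int, out_dim: int):
        super().__init__()
        self.proj_static = nn.Linear(static_dim, out_dim)
        self.proj_contextual = nn.Linear(contextual_dim, out_dim)
        # Gate decides, per token and per dimension, how much contextual signal to use.
        self.gate = nn.Linear(static_dim + contextual_dim, out_dim)

    def forward(self, static_emb: torch.Tensor, contextual_emb: torch.Tensor) -> torch.Tensor:
        # static_emb:     (batch, seq_len, static_dim)
        # contextual_emb: (batch, seq_len, contextual_dim)
        g = torch.sigmoid(self.gate(torch.cat([static_emb, contextual_emb], dim=-1)))
        return g * self.proj_contextual(contextual_emb) + (1 - g) * self.proj_static(static_emb)


# Usage with toy dimensions (GloVe-sized static vectors, larger LM-sized contextual vectors).
mix = GatedWordRepresentation(static_dim=300, contextual_dim=1024, out_dim=300)
static = torch.randn(2, 20, 300)
contextual = torch.randn(2, 20, 1024)
mixed = mix(static, contextual)  # shape: (2, 20, 300)
```

The gate lets the model fall back to the static embedding for tokens where the contextual representation is uninformative, which is one plausible way to realize the "choose between representations" behavior the abstract describes.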


Datasets

SQuAD1.1

Results from the Paper


Task                 Dataset   Model                           Metric  Value   Global Rank
Question Answering   SQuAD1.1  RaSoR + TR (single model)       EM      75.789  #118
Question Answering   SQuAD1.1  RaSoR + TR (single model)       F1      83.261  #124
Question Answering   SQuAD1.1  RaSoR + TR + LM (single model)  EM      77.583  #99
Question Answering   SQuAD1.1  RaSoR + TR + LM (single model)  F1      84.163  #115

Methods


No methods listed for this paper.