Contextualized Word Representations for Reading Comprehension

NAACL 2018  ·  Shimi Salant, Jonathan Berant

Reading a document and extracting an answer to a question about its content has attracted substantial attention recently. While most work has focused on the interaction between the question and the document, in this work we evaluate the importance of context when the question and document are processed independently. We take a standard neural architecture for this task, and show that by providing rich contextualized word representations from a large pre-trained language model as well as allowing the model to choose between context-dependent and context-independent word representations, we can obtain dramatic improvements and reach performance comparable to state-of-the-art on the competitive SQuAD dataset.
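The abstract's second ingredient, letting the model choose between context-dependent and context-independent word representations, can be pictured as a learned elementwise gate that mixes a static word embedding with a pre-trained LM's contextual embedding. The sketch below is an illustrative NumPy toy, not the paper's implementation: the dimension, parameter names (`W_g`, `b_g`), and random initialization are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy embedding dimension (hypothetical; real models use far larger vectors)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_representation(e_static, e_lm, W_g, b_g):
    """Mix a context-independent embedding (e_static) with a context-dependent
    LM embedding (e_lm) via a learned sigmoid gate, so the model can choose
    per dimension how much of each representation to use."""
    g = sigmoid(W_g @ np.concatenate([e_static, e_lm]) + b_g)
    return g * e_static + (1.0 - g) * e_lm

# Toy inputs and randomly initialized gate parameters (illustrative only).
e_static = rng.standard_normal(d)   # e.g. a GloVe-style static embedding
e_lm = rng.standard_normal(d)       # e.g. a pre-trained LM's contextual state
W_g = rng.standard_normal((d, 2 * d))
b_g = np.zeros(d)

rep = gated_representation(e_static, e_lm, W_g, b_g)
```

Because the gate lies in (0, 1), each output dimension is a convex combination of the two inputs: when the gate saturates toward 1 the model falls back on the static embedding, and toward 0 it relies on the contextual one.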



Results from the Paper

Task                 Dataset    Model                           Metric  Value   Global Rank
Question Answering   SQuAD1.1   RaSoR + TR (single model)       EM      75.789  #115
Question Answering   SQuAD1.1   RaSoR + TR (single model)       F1      83.261  #118
Question Answering   SQuAD1.1   RaSoR + TR + LM (single model)  EM      77.583  #96
Question Answering   SQuAD1.1   RaSoR + TR + LM (single model)  F1      84.163  #109

