Reading Comprehension

82 papers with code · Natural Language Processing

Most current question answering datasets frame the task as reading comprehension: the question is about a paragraph or document, and the answer is often a span of that document. The Machine Reading group at UCL also provides an overview of reading comprehension tasks.
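Concretely, a span-extraction example pairs a passage with a question and identifies the answer by its character offset in the passage. A minimal illustrative sketch in Python (the passage and question are made up, but the format mirrors SQuAD-style datasets):

```python
# Illustrative SQuAD-style example: the answer is a span of the passage, identified
# by its character offset (the text here is made up).
example = {
    "context": "The Amazon rainforest covers much of the Amazon basin of South America.",
    "question": "What does the Amazon rainforest cover?",
    "answer": {"text": "much of the Amazon basin of South America", "answer_start": 29},
}

start = example["answer"]["answer_start"]
end = start + len(example["answer"]["text"])

# A span-extraction model predicts (start, end); here we just recover the gold span.
assert example["context"][start:end] == example["answer"]["text"]
print(example["context"][start:end])
```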

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

AllenNLP: A Deep Semantic Natural Language Processing Platform

WS 2018 allenai/allennlp

This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding. AllenNLP is designed to support researchers who want to build novel language understanding models quickly and easily.

READING COMPREHENSION SEMANTIC ROLE LABELING
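As a rough illustration of the workflow AllenNLP targets, the sketch below loads a pretrained reading comprehension predictor and queries it. The archive path is a placeholder, and model packaging and output keys have varied across AllenNLP releases, so treat this as illustrative rather than the library's exact current API.

```python
# Sketch only: loading a pretrained reading-comprehension predictor with AllenNLP.
# The archive path is a placeholder, and model packaging / output keys have varied
# across AllenNLP releases, so treat this as illustrative rather than exact.
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path("path/to/reading-comprehension-model.tar.gz")  # placeholder
result = predictor.predict_json({
    "passage": "AllenNLP is a platform for research on deep learning methods in NLP.",
    "question": "What is AllenNLP?",
})
print(result.get("best_span_str", result))  # key name depends on the model
```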

Reading Wikipedia to Answer Open-Domain Questions

ACL 2017 facebookresearch/ParlAI

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles).

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION
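The "machine reading at scale" setup described above is a retrieve-then-read pipeline. The sketch below illustrates that pattern with scikit-learn TF-IDF retrieval over a toy corpus and a placeholder reading step; the actual system uses a bigram-hashing TF-IDF retriever and a neural span-extraction reader.

```python
# Retrieve-then-read sketch: TF-IDF retrieval over a toy "Wikipedia", followed by a
# placeholder reading step. Both components stand in for DrQA's actual retriever
# (bigram-hashing TF-IDF) and reader (a neural span-extraction model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
    "The Great Wall of China was built across the historical northern borders of China.",
    "Mount Everest is Earth's highest mountain above sea level.",
]
question = "Where is the Eiffel Tower located?"

# Stage 1: document retrieval -- find the most relevant article.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(articles)
scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
best_doc = articles[scores.argmax()]

# Stage 2: machine comprehension -- a neural reader would extract an answer span;
# here we simply return the retrieved article as a placeholder.
print("Retrieved:", best_doc)
```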

Embracing data abundance: BookTest Dataset for Reading Comprehension

4 Oct 2016 facebookresearch/ParlAI

There is a practically unlimited amount of natural language data available. We show that training on the new data improves the accuracy of our Attention-Sum Reader model on the original CBT test data by a much larger margin than many recent attempts to improve the model architecture.

READING COMPREHENSION
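The Attention-Sum Reader referenced above answers cloze-style questions by summing attention weights over every occurrence of each candidate word in the document. A small NumPy sketch of that aggregation step, with made-up attention scores:

```python
# Attention-sum aggregation sketch (NumPy, made-up numbers): the score of each
# candidate answer is the sum of attention weights over its occurrences in the text.
import numpy as np

document = ["mary", "went", "to", "the", "kitchen", "then", "mary", "took", "the", "apple"]
attention = np.array([0.30, 0.02, 0.01, 0.02, 0.20, 0.01, 0.25, 0.04, 0.05, 0.10])
attention = attention / attention.sum()        # normalise, as a softmax would

candidates = ["mary", "kitchen", "apple"]
scores = {
    c: float(attention[[i for i, w in enumerate(document) if w == c]].sum())
    for c in candidates
}
print(max(scores, key=scores.get), scores)     # "mary" wins: 0.30 + 0.25 = 0.55
```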

Teaching Machines to Read and Comprehend

NeurIPS 2015 facebookresearch/ParlAI

Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions posed on the contents of documents that they have seen, but until now large scale training and test datasets have been missing for this type of evaluation.

READING COMPREHENSION

Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

19 Feb 2015 facebookresearch/ParlAI

One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering.

QUESTION ANSWERING READING COMPREHENSION
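For concreteness, each proxy task pairs a short synthetic story with a question, a one-word answer, and the supporting facts needed to derive it. The instance below is made up for illustration and is not drawn from the released data:

```python
# An illustrative bAbI-style toy QA instance (made up, not taken from the released
# data): a short synthetic story, a question, a one-word answer, and the indices of
# the supporting facts a system would need to use.
task_instance = {
    "story": [
        "Mary moved to the bathroom.",
        "John went to the hallway.",
        "Mary travelled to the office.",
    ],
    "question": "Where is Mary?",
    "answer": "office",
    "supporting_facts": [2],   # 0-based index into the story
}

# A trivial baseline: answer with the last word of the most recent sentence about Mary.
latest = next(s for s in reversed(task_instance["story"]) if "Mary" in s)
print(latest.rstrip(".").split()[-1])   # -> "office"
```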

Language Models are Unsupervised Multitask Learners

Preprint 2019 openai/gpt-2

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText.

COMMON SENSE REASONING DOCUMENT SUMMARIZATION LANGUAGE MODELLING MACHINE TRANSLATION QUESTION ANSWERING READING COMPREHENSION
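The zero-shot reading comprehension setup amounts to conditioning the language model on a document and a question and letting it continue the text with an answer. The sketch below shows that prompting idea using the Hugging Face transformers library (not part of this page's listed code); the prompt format and decoding settings are simplified relative to the paper's evaluation.

```python
# Zero-shot reading comprehension by prompting a language model (sketch only).
# Uses the Hugging Face transformers library for convenience; the prompt format and
# decoding settings are simplified relative to the GPT-2 paper's actual evaluation.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = (
    "The Apollo 11 mission landed the first humans on the Moon in 1969.\n"
    "Q: When did Apollo 11 land on the Moon?\n"
    "A:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])
print(answer)
```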

Bidirectional Attention Flow for Machine Comprehension

5 Nov 2016 allenai/bi-att-flow

Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC.

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION
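The bidirectional attention referred to above derives both context-to-query and query-to-context attention from a single token-level similarity matrix. Below is a small PyTorch sketch of that step with random tensors; the similarity here is a plain dot product, whereas the paper learns a linear function of [h; u; h*u].

```python
# Bidirectional attention sketch (PyTorch, random inputs): context-to-query and
# query-to-context attention are both derived from one token-level similarity matrix.
# The similarity here is a plain dot product; BiDAF learns a linear function of
# [h; u; h*u] instead.
import torch
import torch.nn.functional as F

T, J, d = 6, 4, 8                              # context length, query length, hidden size
H = torch.randn(T, d)                          # context encodings
U = torch.randn(J, d)                          # query encodings

S = H @ U.T                                    # similarity matrix, shape (T, J)

# Context-to-query: each context word attends over the query words.
a = F.softmax(S, dim=1)                        # (T, J)
U_tilde = a @ U                                # (T, d)

# Query-to-context: weight context words by their best match with any query word.
b = F.softmax(S.max(dim=1).values, dim=0)      # (T,)
h_tilde = (b.unsqueeze(1) * H).sum(dim=0)      # (d,)
H_tilde = h_tilde.expand(T, d)                 # tiled over context positions

# Query-aware context representation, analogous to BiDAF's G.
G = torch.cat([H, U_tilde, H * U_tilde, H * H_tilde], dim=1)   # (T, 4d)
print(G.shape)
```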

QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension

ICLR 2018 NLPLearn/QANet

Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference, while achieving equivalent accuracy to recurrent models.

MACHINE TRANSLATION QUESTION ANSWERING READING COMPREHENSION
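QANet's encoder replaces recurrence with convolution and self-attention. Below is a simplified PyTorch sketch of one encoder block (depthwise-separable convolutions, multi-head self-attention, and a feed-forward layer, each wrapped in layer norm plus a residual connection); positional encodings, layer dropout, and other details from the paper are omitted.

```python
# Simplified QANet-style encoder block (PyTorch): depthwise-separable convolutions,
# multi-head self-attention, and a feed-forward layer, each with layer norm and a
# residual connection. Positional encodings and layer dropout are omitted.
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d_model=128, n_convs=2, kernel_size=7, n_heads=8):
        super().__init__()
        self.conv_norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(n_convs))
        self.convs = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(d_model, d_model, kernel_size, padding=kernel_size // 2,
                          groups=d_model),                   # depthwise convolution
                nn.Conv1d(d_model, d_model, kernel_size=1),  # pointwise convolution
                nn.ReLU(),
            )
            for _ in range(n_convs)
        )
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff_norm = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                nn.Linear(d_model, d_model))

    def forward(self, x):                                    # x: (batch, seq_len, d_model)
        for norm, conv in zip(self.conv_norms, self.convs):
            x = x + conv(norm(x).transpose(1, 2)).transpose(1, 2)   # conv + residual
        h = self.attn_norm(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out                                     # self-attention + residual
        return x + self.ff(self.ff_norm(x))                  # feed-forward + residual

x = torch.randn(2, 50, 128)
print(EncoderBlock()(x).shape)                               # torch.Size([2, 50, 128])
```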

Machine Comprehension Using Match-LSTM and Answer Pointer

29 Aug 2016 baidu/DuReader

Machine comprehension of text is an important problem in natural language processing. We propose two ways of using Pointer Net for our task.

NATURAL LANGUAGE INFERENCE QUESTION ANSWERING READING COMPREHENSION
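One of the two Pointer Net variants in the paper, the boundary model, predicts only the start and end positions of the answer span. A NumPy sketch of decoding the best valid span from start/end probabilities (made-up numbers, with a maximum span length):

```python
# Boundary-style span decoding sketch (NumPy, made-up scores): choose the
# (start, end) pair with the highest combined probability, subject to start <= end
# and a maximum span length.
import numpy as np

p_start = np.array([0.05, 0.60, 0.10, 0.15, 0.10])   # P(answer starts at i)
p_end   = np.array([0.05, 0.10, 0.15, 0.60, 0.10])   # P(answer ends at j)
max_span_len = 4

best, best_score = None, -1.0
for i in range(len(p_start)):
    for j in range(i, min(i + max_span_len, len(p_end))):
        score = p_start[i] * p_end[j]
        if score > best_score:
            best, best_score = (i, j), score

print(best, best_score)   # -> (1, 3) with score 0.36 (up to float rounding)
```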

SQuAD: 100,000+ Questions for Machine Comprehension of Text

EMNLP 2016 HKUST-KnowComp/R-Net

We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees.

QUESTION ANSWERING READING COMPREHENSION
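SQuAD systems are conventionally scored with exact match and token-level F1 against the reference answers. Below is a simplified sketch of those two metrics; the official evaluation script additionally lowercases answers and strips punctuation and articles before comparison.

```python
# Simplified SQuAD-style metrics: exact match and bag-of-tokens F1. The official
# evaluation script additionally lowercases answers and strips punctuation and
# articles before comparing; that normalisation is omitted here.
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    return float(prediction.strip() == reference.strip())

def token_f1(prediction: str, reference: str) -> float:
    pred, ref = prediction.split(), reference.split()
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Denver Broncos", "Denver Broncos"))   # 1.0
print(token_f1("the Denver Broncos", "Denver Broncos"))  # ~0.8
```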