
Reading Comprehension

126 papers with code · Natural Language Processing

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in the document. The Machine Reading group at UCL also provides an overview of reading comprehension tasks.
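The span-based framing above can be illustrated with a minimal sketch: given per-token start and end scores (here toy numbers standing in for a reader model's outputs, not any particular system), the predicted answer is the highest-scoring valid span.

```python
def best_span(start_scores, end_scores, max_len=15):
    """Pick (start, end) maximizing start_scores[s] + end_scores[e],
    subject to s <= e < s + max_len. Toy stand-in for a reader head."""
    best, best_score = (0, 0), float("-inf")
    for s, s_score in enumerate(start_scores):
        for e in range(s, min(s + max_len, len(end_scores))):
            score = s_score + end_scores[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best

context = "The Eiffel Tower is located in Paris .".split()
# Hypothetical scores peaking at the token "Paris" (index 6)
start = [0.0] * len(context)
end = [0.0] * len(context)
start[6], end[6] = 5.0, 4.0
s, e = best_span(start, end)
print(" ".join(context[s:e + 1]))  # -> Paris
```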

State-of-the-art leaderboards


Greatest papers with code

XLNet: Generalized Autoregressive Pretraining for Language Understanding

19 Jun 2019 huggingface/pytorch-pretrained-BERT

With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

DOCUMENT RANKING LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE QUESTION ANSWERING READING COMPREHENSION SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS TEXT CLASSIFICATION

Language Models are Unsupervised Multitask Learners

Preprint 2019 huggingface/pytorch-pretrained-BERT

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

SOTA for Language Modelling on Text8 (using extra training data)

COMMON SENSE REASONING DOCUMENT SUMMARIZATION LANGUAGE MODELLING MACHINE TRANSLATION QUESTION ANSWERING READING COMPREHENSION

AllenNLP: A Deep Semantic Natural Language Processing Platform

WS 2018 allenai/allennlp

This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding.

READING COMPREHENSION SEMANTIC ROLE LABELING

Reading Wikipedia to Answer Open-Domain Questions

ACL 2017 facebookresearch/ParlAI

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION
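Open-domain systems like the one above typically pair a document retriever with a span-extracting reader. A toy TF-IDF-style retriever (a simplified stand-in for the paper's bigram-hashing retriever, not its actual implementation) can be sketched as:

```python
import math
from collections import Counter

def retrieve(question, docs):
    """Toy retriever: rank documents by TF-IDF-weighted term overlap
    with the question, returning the index of the best document."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    def score(doc_tokens):
        tf = Counter(doc_tokens)
        return sum(tf[t] * math.log(n / df[t])
                   for t in question.lower().split() if t in tf)
    return max(range(n), key=lambda i: score(tokenized[i]))

docs = [
    "Paris is the capital of France .",
    "Berlin is the capital of Germany .",
]
i = retrieve("what is the capital of France ?", docs)
print(docs[i])  # the reader stage would then extract a span from this document
```

A reader model would then run span extraction over only the retrieved document, keeping the whole corpus out of the expensive neural stage.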

Embracing data abundance: BookTest Dataset for Reading Comprehension

4 Oct 2016 facebookresearch/ParlAI

We show that training on the new data improves the accuracy of our Attention-Sum Reader model on the original CBT test data by a much larger margin than many recent attempts to improve the model architecture.

READING COMPREHENSION
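The Attention-Sum Reader mentioned above answers cloze questions by pointing into the context: attention weights over context tokens are summed across every occurrence of each candidate answer, and the candidate with the largest total wins. A minimal sketch with toy weights (not the paper's model):

```python
from collections import defaultdict

def attention_sum(tokens, attention, candidates):
    """Sum attention mass over every occurrence of each candidate
    answer in the context; return the highest-scoring candidate."""
    totals = defaultdict(float)
    for tok, w in zip(tokens, attention):
        if tok in candidates:
            totals[tok] += w
    return max(totals, key=totals.get)

tokens = ["Alice", "met", "Bob", "then", "Alice", "left"]
attention = [0.3, 0.05, 0.35, 0.05, 0.2, 0.05]
# "Bob" has the single largest weight, but "Alice" wins on summed mass
print(attention_sum(tokens, attention, {"Alice", "Bob"}))  # -> Alice
```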

Teaching Machines to Read and Comprehend

NeurIPS 2015 facebookresearch/ParlAI

Teaching machines to read natural language documents remains an elusive challenge.

READING COMPREHENSION

Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

19 Feb 2015 facebookresearch/ParlAI

One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent.

QUESTION ANSWERING READING COMPREHENSION

DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications

WS 2018 PaddlePaddle/models

Experiments show that human performance is well above current state-of-the-art baseline systems, leaving plenty of room for the community to make improvements.

MACHINE READING COMPREHENSION

Bidirectional Attention Flow for Machine Comprehension

5 Nov 2016 allenai/bi-att-flow

Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query.

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION
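One half of the context-query interaction described above, context-to-query attention, can be sketched in plain Python: each context vector attends over the query vectors via dot-product similarity and collects a weighted query summary. This is a toy illustration of the mechanism, not the BiDAF implementation.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def context_to_query(context_vecs, query_vecs):
    """For each context vector, compute dot-product similarities to all
    query vectors, softmax them, and return the attention-weighted
    query summary (one vector per context position)."""
    attended = []
    for c in context_vecs:
        sims = [sum(ci * qi for ci, qi in zip(c, q)) for q in query_vecs]
        weights = softmax(sims)
        dim = len(query_vecs[0])
        attended.append([sum(w * q[d] for w, q in zip(weights, query_vecs))
                         for d in range(dim)])
    return attended

C = [[1.0, 0.0], [0.0, 1.0]]               # two context token vectors
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # three query token vectors
U = context_to_query(C, Q)
print(len(U), len(U[0]))  # 2 context positions, each a 2-dim query summary
```

The full model also uses a query-to-context direction and carries the raw similarities forward rather than summarizing early, which is the "flow" the title refers to.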