
Reading Comprehension

168 papers with code · Natural Language Processing

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in that document. The Machine Reading group at UCL also provides an overview of reading comprehension tasks.
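
As a concrete illustration of this extractive setting, the minimal sketch below runs an off-the-shelf SQuAD-style model through the Hugging Face transformers question-answering pipeline; the checkpoint name is just one example, not a recommendation.

```python
# Minimal sketch of extractive reading comprehension: the answer is a span of the context.
# Assumes the `transformers` library; the model name is one example checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = "The Machine Reading group at UCL studies reading comprehension tasks."
question = "Who studies reading comprehension tasks?"

result = qa(question=question, context=context)
# The pipeline returns the answer text plus its character offsets in the context.
print(result["answer"], result["start"], result["end"], result["score"])
```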


Greatest papers with code

Predicting Subjective Features from Questions on QA Websites using BERT

ICWR 2020 tensorflow/models

Community Question-Answering websites, such as StackOverflow and Quora, expect users to follow specific guidelines in order to maintain content quality.

COMMON SENSE REASONING · COMMUNITY QUESTION ANSWERING · QUESTION QUALITY ASSESSMENT · READING COMPREHENSION

RoBERTa: A Robustly Optimized BERT Pretraining Approach

26 Jul 2019 huggingface/transformers

Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging.

SOTA for Question Answering on SQuAD2.0 dev (using extra training data)

LANGUAGE MODELLING · LEXICAL SIMPLIFICATION · NATURAL LANGUAGE INFERENCE · QUESTION ANSWERING · READING COMPREHENSION · SEMANTIC TEXTUAL SIMILARITY · SENTIMENT ANALYSIS
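
A minimal sketch of using a RoBERTa checkpoint as an extractive reader through huggingface/transformers is shown below; the SQuAD2.0-fine-tuned model id is an assumption, and any compatible QA checkpoint can be substituted.

```python
# Sketch: RoBERTa as an extractive reader. The model id below is an assumed
# SQuAD2.0-fine-tuned checkpoint, not the paper's official release.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "deepset/roberta-base-squad2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "What does RoBERTa study?"
context = ("RoBERTa is a replication study of BERT pretraining that carefully "
           "measures the impact of key hyperparameters and training data size.")

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start/end token positions and decode the answer span.
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```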

XLNet: Generalized Autoregressive Pretraining for Language Understanding

NeurIPS 2019 huggingface/transformers

With the capability of modeling bidirectional contexts, denoising autoencoding-based pretraining such as BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

DOCUMENT RANKING · LANGUAGE MODELLING · NATURAL LANGUAGE INFERENCE · QUESTION ANSWERING · READING COMPREHENSION · SEMANTIC TEXTUAL SIMILARITY · SENTIMENT ANALYSIS · TEXT CLASSIFICATION
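
A minimal sketch of loading the released XLNet base model through huggingface/transformers follows; note that the classification head is randomly initialized here and would need fine-tuning before real use.

```python
# Sketch: loading the pretrained XLNet base checkpoint via huggingface/transformers.
# The sequence-classification head is newly initialized and must be fine-tuned.
import torch
from transformers import XLNetTokenizer, XLNetForSequenceClassification

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)

inputs = tokenizer("XLNet models bidirectional context autoregressively.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 2); untrained head, for illustration only
print(logits.shape)
```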

Language Models are Unsupervised Multitask Learners

Preprint 2019 huggingface/transformers

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

SOTA for Language Modelling on Text8 (using extra training data)

COMMON SENSE REASONING · DOCUMENT SUMMARIZATION · LANGUAGE MODELLING · MACHINE TRANSLATION · QUESTION ANSWERING · READING COMPREHENSION · TEXT GENERATION
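
The GPT-2 checkpoints described in this paper are distributed through huggingface/transformers; a minimal sketch of prompt-based, zero-shot use (no task-specific fine-tuning) is below. The generation arguments assume a recent transformers release.

```python
# Sketch: zero-shot, prompt-based use of the small released GPT-2 checkpoint ("gpt2").
# No task-specific fine-tuning, just conditioning on a prompt.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Q: What is reading comprehension?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=30, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0]))
```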

AllenNLP: A Deep Semantic Natural Language Processing Platform

WS 2018 allenai/allennlp

This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding.

READING COMPREHENSION · SEMANTIC ROLE LABELING
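
A minimal sketch of running a pretrained reading-comprehension model through AllenNLP's predictor API follows; it assumes both allennlp and allennlp-models are installed, and the archive URL is one published BiDAF model that may differ between releases.

```python
# Sketch: pretrained reading comprehension with AllenNLP's Predictor API.
# Assumes `allennlp` and `allennlp-models` are installed; the archive URL is one
# published BiDAF model and may change between releases.
from allennlp.predictors.predictor import Predictor
import allennlp_models.rc  # registers the reading-comprehension models

predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/bidaf-model-2020.03.19.tar.gz"
)
result = predictor.predict(
    passage="AllenNLP is a platform for research on deep learning methods in NLP.",
    question="What is AllenNLP?",
)
print(result["best_span_str"])
```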

Reading Wikipedia to Answer Open-Domain Questions

ACL 2017 facebookresearch/ParlAI

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.

OPEN-DOMAIN QUESTION ANSWERING · READING COMPREHENSION
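
The retrieve-then-read idea can be sketched in a few lines: a TF-IDF retriever selects a passage and an extractive reader finds the answer span. This is an illustration only, not the paper's DrQA implementation; the reader below is a generic transformers QA pipeline used as a stand-in.

```python
# Illustrative retrieve-then-read sketch (NOT the paper's DrQA code):
# a TF-IDF retriever picks the most relevant passage, then a reader extracts the span.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

passages = [
    "Warsaw is the capital and largest city of Poland.",
    "The Amazon rainforest covers much of northwestern Brazil.",
]
question = "What is the capital of Poland?"

# Retrieve: rank passages by TF-IDF similarity to the question.
vec = TfidfVectorizer().fit(passages + [question])
scores = cosine_similarity(vec.transform([question]), vec.transform(passages))[0]
best_passage = passages[scores.argmax()]

# Read: extract the answer span from the retrieved passage.
reader = pipeline("question-answering")
print(reader(question=question, context=best_passage)["answer"])
```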

Embracing data abundance: BookTest Dataset for Reading Comprehension

4 Oct 2016 facebookresearch/ParlAI

We show that training on the new data improves the accuracy of our Attention-Sum Reader model on the original CBT test data by a much larger margin than many recent attempts to improve the model architecture.

READING COMPREHENSION
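
The Attention-Sum Reader mentioned above scores a candidate answer by summing the attention mass over every position where that candidate occurs in the context. A minimal NumPy sketch of this pointer-sum step (with placeholder random encodings, not the paper's model) is shown below.

```python
# Minimal sketch of the attention-sum step: attention over context tokens is computed
# against the question encoding, and a candidate's probability is the SUM of attention
# at all positions where it occurs. Encodings here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
hidden = 8
context_tokens = ["the", "cat", "sat", "on", "the", "mat"]

token_states = rng.normal(size=(len(context_tokens), hidden))  # contextual token encodings
question_state = rng.normal(size=hidden)                       # question encoding

scores = token_states @ question_state
attention = np.exp(scores - scores.max())
attention /= attention.sum()

# Sum attention mass over repeated occurrences of each candidate word.
candidate_probs = {}
for tok, a in zip(context_tokens, attention):
    candidate_probs[tok] = candidate_probs.get(tok, 0.0) + a
print(max(candidate_probs, key=candidate_probs.get))
```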

Teaching Machines to Read and Comprehend

NeurIPS 2015 facebookresearch/ParlAI

Teaching machines to read natural language documents remains an elusive challenge.

READING COMPREHENSION

Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

19 Feb 2015 facebookresearch/ParlAI

One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent.

QUESTION ANSWERING · READING COMPREHENSION
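
The bAbI tasks from this paper are available through the facebookresearch/ParlAI repo listed above; a minimal sketch of printing a few examples with ParlAI's scripting API follows. It assumes ParlAI is installed, and the task string and script interface follow recent releases, so they may differ by version.

```python
# Sketch: browsing the bAbI tasks via ParlAI. Assumes `pip install parlai`;
# the task string and script API follow recent ParlAI releases and may differ by version.
from parlai.scripts.display_data import DisplayData

# Print a few examples from bAbI task 1 (single supporting fact), 10k training set.
DisplayData.main(task="babi:task10k:1", num_examples=5)
```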

DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications

WS 2018 PaddlePaddle/models

Experiments show that human performance is well above current state-of-the-art baseline systems, leaving plenty of room for the community to make improvements.

MACHINE READING COMPREHENSION