Reading Comprehension

318 papers with code • 6 benchmarks • 89 datasets

Most current question answering datasets frame the task as reading comprehension: the question is about a paragraph or document, and the answer is often a span in that document. The Machine Reading group at UCL also provides an overview of reading comprehension tasks.
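The span-extraction formulation above can be sketched in a few lines. The snippet below shows the decoding step shared by most extractive RC models: given per-token start and end scores (hypothetical toy values here, where a real model would produce them), pick the highest-scoring valid span.

```python
# Minimal sketch of span selection in extractive reading comprehension.
# Scores are hypothetical; a trained model would produce them per token.

def best_span(start_scores, end_scores, max_len=15):
    """Return (start, end) maximizing start_scores[s] + end_scores[e],
    subject to end >= start and a bounded span length."""
    best, best_score = (0, 0), float("-inf")
    for s, s_score in enumerate(start_scores):
        for e in range(s, min(s + max_len, len(end_scores))):
            score = s_score + end_scores[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

# Toy context tokenized by whitespace; the model "prefers" the answer token.
tokens = "The Eiffel Tower is in Paris".split()
start = [0.1, 0.2, 0.1, 0.0, 0.1, 2.5]
end   = [0.0, 0.1, 0.3, 0.1, 0.2, 2.8]
s, e = best_span(start, end)
print(" ".join(tokens[s:e + 1]))  # prints "Paris"
```

Real systems additionally handle no-answer cases and search only the top-k start/end candidates, but the core argmax over valid spans is the same.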

Greatest papers with code

Robust Reading Comprehension with Linguistic Constraints via Posterior Regularization

huggingface/pytorch-pretrained-BERT 16 Nov 2019

In this paper, we address the over-confidence and over-sensitivity issues in current RC models simultaneously with the help of external linguistic knowledge.

Tasks: Machine Reading Comprehension

mT5: A massively multilingual pre-trained text-to-text transformer

huggingface/transformers NAACL 2021

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks.

Tasks: Common Sense Reasoning, Natural Language Inference, +2 more

Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing

huggingface/transformers NeurIPS 2020

With the success of language pretraining, it is highly desirable to develop more efficient architectures of good scalability that can exploit the abundant unlabeled data at a lower cost.

Tasks: Reading Comprehension, Text Classification

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

huggingface/transformers ICLR 2021

Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks.

Tasks: Common Sense Reasoning, Coreference Resolution, +9 more

Knowledge Guided Text Retrieval and Reading for Open Domain Question Answering

huggingface/transformers 10 Nov 2019

We introduce an approach for open-domain question answering (QA) that retrieves and reads a passage graph, where vertices are passages of text and edges represent relationships that are derived from an external knowledge base or co-occurrence in the same article.

Tasks: Open-Domain Question Answering, Reading Comprehension, +1 more
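The passage graph described in this entry can be sketched concretely. The toy code below builds a graph whose vertices are passages and whose edges link passages from the same article (the co-occurrence edges mentioned in the abstract); the passage IDs and article names are invented for illustration, and KB-derived edges are omitted for brevity.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical sketch of a passage graph: vertices are passages,
# edges connect passages that co-occur in the same article.

def build_passage_graph(passages):
    """passages: list of (passage_id, article_id). Returns adjacency dict."""
    by_article = defaultdict(list)
    for pid, article in passages:
        by_article[article].append(pid)
    graph = defaultdict(set)
    for pids in by_article.values():
        for a, b in combinations(pids, 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

# Invented example: two passages from one article, one from another.
passages = [("p1", "Paris"), ("p2", "Paris"), ("p3", "France")]
graph = build_passage_graph(passages)
print(sorted(graph["p1"]))  # prints ['p2']
```

A retriever would then expand from an initial set of retrieved passages along these edges, and the reader would score spans over the resulting subgraph.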

RoBERTa: A Robustly Optimized BERT Pretraining Approach

huggingface/transformers 26 Jul 2019

Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging.

Tasks: Common Sense Reasoning, Language Modelling, +6 more

XLNet: Generalized Autoregressive Pretraining for Language Understanding

huggingface/transformers NeurIPS 2019

With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

Tasks: Document Ranking, Humor Detection, +7 more

Language Models are Unsupervised Multitask Learners

huggingface/transformers Preprint 2019

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

Ranked #1 on Language Modelling on enwik8 (using extra training data)

Tasks: Common Sense Reasoning, Data-to-Text Generation, +6 more

AllenNLP: A Deep Semantic Natural Language Processing Platform

allenai/allennlp WS 2018

This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding.

Tasks: Natural Language Understanding, Platform, +2 more

Reading Wikipedia to Answer Open-Domain Questions

facebookresearch/ParlAI ACL 2017

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.

Tasks: Open-Domain Question Answering, Reading Comprehension
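The retrieve-then-read setup in this last entry has two stages: first retrieve a relevant article, then extract an answer span from it. Below is a minimal sketch of the first stage using TF-IDF overlap between the question and each document; the corpus and scoring are toy examples, not the paper's actual retriever.

```python
import math
from collections import Counter

# Toy TF-IDF retriever: score each document by the summed tf-idf weight
# of the query terms it contains, and return the best-scoring document.

def tfidf_score(query, doc, docs):
    """Sum of tf * log(N / df) over query terms present in doc."""
    n = len(docs)
    tf = Counter(doc.lower().split())
    score = 0.0
    for term in query.lower().split():
        df = sum(1 for d in docs if term in d.lower().split())
        if df and tf[term]:
            score += tf[term] * math.log(n / df)
    return score

def retrieve(query, docs):
    """Return the document with the highest TF-IDF overlap with the query."""
    return max(docs, key=lambda d: tfidf_score(query, d, docs))

# Invented three-document corpus standing in for Wikipedia.
corpus = [
    "Paris is the capital and largest city of France",
    "The Great Wall of China is visible from low orbit",
    "Python is a widely used programming language",
]
print(retrieve("What is the capital of France ?", corpus))
```

A reading comprehension model such as those listed above would then be applied to the retrieved document to select the answer span.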