
Reading Comprehension

125 papers with code · Natural Language Processing

Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in the document. The Machine Reading group at UCL also provides an overview of reading comprehension tasks.

State-of-the-art leaderboards


Latest papers with code

AmazonQA: A Review-Based Question Answering Task

12 Aug 2019 · amazonqa/amazonqa

Observing that many questions can be answered based upon the available product reviews, we propose the task of review-based QA.

INFORMATION RETRIEVAL QUESTION ANSWERING READING COMPREHENSION

21

Beyond English-only Reading Comprehension: Experiments in Zero-Shot Multilingual Transfer for Bulgarian

5 Aug 2019 · mhardalov/bg-reason-BERT

Recently, reading comprehension models have achieved near-human performance on large-scale datasets such as SQuAD, CoQA, MS MARCO, and RACE.

READING COMPREHENSION

0

XQA: A Cross-lingual Open-domain Question Answering Dataset

ACL 2019 · thunlp/XQA

Experimental results show that the multilingual BERT model achieves the best results in almost all target languages, while the performance of cross-lingual OpenQA is still much lower than that of English.

MACHINE TRANSLATION OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION

18
01 Jul 2019

Katecheo: A Portable and Modular System for Multi-Topic Question Answering

1 Jul 2019 · cvdigitalai/katecheo

We introduce a modular system that can be deployed on any Kubernetes cluster for question answering via REST API.

QUESTION ANSWERING READING COMPREHENSION

9

XLNet: Generalized Autoregressive Pretraining for Language Understanding

19 Jun 2019 · huggingface/pytorch-pretrained-BERT

Thanks to the capability of modeling bidirectional contexts, denoising-autoencoding-based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

DOCUMENT RANKING LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE QUESTION ANSWERING READING COMPREHENSION SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS TEXT CLASSIFICATION

11,051

Pre-Training with Whole Word Masking for Chinese BERT

19 Jun 2019 · ymcui/Chinese-BERT-wwm

In this technical report, we adapt whole-word masking to Chinese text: the whole word is masked instead of individual Chinese characters, which makes the Masked Language Model (MLM) pre-training task more challenging.

LANGUAGE MODELLING MACHINE READING COMPREHENSION NAMED ENTITY RECOGNITION NATURAL LANGUAGE INFERENCE SENTIMENT ANALYSIS

1,120
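The whole-word masking idea in the entry above can be sketched in a few lines: instead of sampling `[MASK]` positions per subword token, tokens are first grouped into whole words so that all pieces of a word are masked together. This is an illustrative sketch only, based on the standard BERT WordPiece convention where a `##` prefix marks a subword continuation; the function name and the 15% masking rate are assumptions, not code from the paper:

```python
import random

def whole_word_mask(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Mask whole words rather than individual subword pieces (sketch).

    A token starting with "##" is treated as a continuation of the
    preceding word, so it is grouped with it before sampling.
    """
    # Group token indices into whole words.
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])

    masked = list(tokens)
    for word in words:
        if random.random() < mask_prob:
            for i in word:  # mask every piece of the word together
                masked[i] = mask_token
    return masked
```

For Chinese text, where characters carry no `##` continuation marker, the grouping step would instead come from a word segmenter that decides which characters form one word.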

Zero-Shot Entity Linking by Reading Entity Descriptions

ACL 2019 · lajanugen/zeshel

First, we show that strong reading comprehension models pre-trained on large unlabeled data can be used to generalize to unseen entities.

ENTITY LINKING READING COMPREHENSION

8
18 Jun 2019

Augmenting Neural Networks with First-order Logic

ACL 2019 · utahnlp/layer_augmentation

Today, the dominant paradigm for training neural networks involves minimizing task loss on a large dataset.

CHUNKING NATURAL LANGUAGE INFERENCE READING COMPREHENSION

17
14 Jun 2019

E3: Entailment-driven Extracting and Editing for Conversational Machine Reading

ACL 2019 · vzhong/e3

Conversational machine reading systems help users answer high-level questions (e.g., determine if they qualify for particular government benefits) when they do not know the exact rules by which the determination is made (e.g., whether they need a certain income level or veteran status).

READING COMPREHENSION

29
12 Jun 2019