Machine Reading Comprehension

197 papers with code • 4 benchmarks • 41 datasets

Machine Reading Comprehension is one of the key problems in Natural Language Understanding: the task is to read and comprehend a given text passage, and then answer questions based on it.
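For extractive MRC, most models score each passage token as a possible answer start and end, and the predicted answer is the highest-scoring valid span. A minimal sketch of that span-selection step (the tokens and scores below are toy values, not outputs of any real model):

```python
def best_span(start_scores, end_scores, max_len=15):
    """Return (start, end) maximizing start+end score, with start <= end
    and a bounded span length, as in typical extractive QA decoding."""
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            if s + end_scores[j] > best_score:
                best_score = s + end_scores[j]
                best = (i, j)
    return best

# Toy passage and made-up per-token scores (illustrative assumptions).
tokens = ["The", "capital", "of", "France", "is", "Paris", "."]
start = [0.1, 0.0, 0.0, 0.2, 0.1, 2.5, 0.0]
end   = [0.0, 0.1, 0.0, 0.3, 0.0, 2.8, 0.1]

i, j = best_span(start, end)
print(" ".join(tokens[i:j + 1]))  # -> Paris
```

Real systems produce the start/end scores with a pretrained encoder; the decoding step shown here is the same idea at a much smaller scale.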

Source: Making Neural Machine Reading Comprehension Faster


Most implemented papers

DuReader_robust: A Chinese Dataset Towards Evaluating Robustness and Generalization of Machine Reading Comprehension in Real-World Applications

baidu/DuReader 23 Apr 2020

Machine reading comprehension (MRC) is a crucial task in natural language processing and has achieved remarkable advancements.

KLUE: Korean Language Understanding Evaluation

KLUE-benchmark/KLUE 20 May 2021

We introduce the Korean Language Understanding Evaluation (KLUE) benchmark.

ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information

ShannonAI/ChineseBert ACL 2021

Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntactic and semantic information for language understanding.

Text Understanding with the Attention Sum Reader Network

rkadlec/asreader ACL 2016

Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test.

Pre-Training with Whole Word Masking for Chinese BERT

ymcui/Chinese-BERT-wwm 19 Jun 2019

To demonstrate the effectiveness of these models, we create a series of Chinese pre-trained language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc.

MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension

jind11/MMM-MCQA 1 Oct 2019

Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligent systems to understand human language.

NumNet: Machine Reading Comprehension with Numerical Reasoning

ranqiu92/NumNet IJCNLP 2019

Numerical reasoning, such as addition, subtraction, sorting, and counting, is a critical skill in human reading comprehension that has not been well considered in existing machine reading comprehension (MRC) systems.

Dice Loss for Data-imbalanced NLP Tasks

ShannonAI/dice_loss_for_NLP ACL 2020

Many NLP tasks such as tagging and machine reading comprehension are faced with the severe data imbalance issue: negative examples significantly outnumber positive examples, and the huge number of background examples (or easy-negative examples) overwhelms the training.
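The paper's remedy is a dice-based loss, which scores the overlap between predictions and gold labels rather than per-example likelihood, so abundant easy negatives contribute little. A minimal sketch of a soft dice loss for binary labels (the smoothing constant and exact variant here are assumptions, not the paper's precise self-adjusting formulation):

```python
def dice_loss(probs, labels, smooth=1.0):
    """Soft dice loss: 1 - (2*sum(p*y) + smooth) / (sum(p^2) + sum(y^2) + smooth).
    probs: predicted positive-class probabilities; labels: 0/1 gold labels."""
    inter = sum(p * y for p, y in zip(probs, labels))
    denom = sum(p * p for p in probs) + sum(y * y for y in labels)
    return 1.0 - (2.0 * inter + smooth) / (denom + smooth)

# Imbalanced toy batch: one positive among many negatives.
labels = [1, 0, 0, 0, 0, 0]
good = [0.9, 0.1, 0.1, 0.1, 0.1, 0.1]  # confident on the positive
bad  = [0.1, 0.1, 0.1, 0.1, 0.1, 0.9]  # confident on a wrong example
assert dice_loss(good, labels) < dice_loss(bad, labels)
```

Because only predictions that overlap with gold positives raise the numerator, driving easy negatives from 0.1 to 0.01 barely changes the loss, which is the property the paper exploits for imbalanced tasks.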

Retrospective Reader for Machine Reading Comprehension

cooelf/AwesomeMRC 27 Jan 2020

Inspired by how humans solve reading comprehension questions, we propose a retrospective reader (Retro-Reader) that integrates two stages of reading and verification strategies: 1) sketchy reading, which briefly investigates the overall interactions of passage and question and yields an initial judgment; 2) intensive reading, which verifies the answer and gives the final prediction.
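The two-stage idea above can be sketched as a final decision rule that combines an answerability verdict from the sketchy pass with span evidence from the intensive pass. The weights, threshold, and score names below are illustrative assumptions, not the exact Retro-Reader formulation:

```python
def retro_decision(answerable_score, span_score, null_score,
                   beta1=0.5, beta2=0.5, threshold=0.0):
    """Combine a sketchy-reading answerability score with the intensive
    reader's best-span vs. no-answer margin; answer only if the combined
    verdict clears the threshold, otherwise abstain."""
    verdict = beta1 * answerable_score + beta2 * (span_score - null_score)
    return "answer" if verdict > threshold else "no-answer"

print(retro_decision(answerable_score=1.2, span_score=3.0, null_score=0.5))   # -> answer
print(retro_decision(answerable_score=-2.0, span_score=0.4, null_score=1.5))  # -> no-answer
```

The point of the second verifier is robustness on unanswerable questions: a strong span score alone cannot force an answer when the sketchy pass judges the question unanswerable.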

Asking Questions the Human Way: Scalable Question-Answer Generation from Text Corpus

bangliu/ACS-QG 27 Jan 2020

In this paper, we propose Answer-Clue-Style-aware Question Generation (ACS-QG), which aims at automatically generating high-quality and diverse question-answer pairs from unlabeled text corpus at scale by imitating the way a human asks questions.