Machine Reading Comprehension
197 papers with code • 4 benchmarks • 41 datasets
Machine Reading Comprehension is one of the key problems in Natural Language Understanding, where the task is to read and comprehend a given text passage, and then answer questions based on it.
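As a quick illustration of the extractive variant of the task, the following sketch runs an off-the-shelf question-answering pipeline from the Hugging Face transformers library; the checkpoint named here is an illustrative choice, not one tied to any paper on this page.

```python
# Minimal extractive MRC example (sketch): read a passage, answer a question.
from transformers import pipeline

# Any extractive QA checkpoint works; this one is a common SQuAD-distilled model.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

passage = (
    "Machine reading comprehension systems read a text passage and answer "
    "questions about it. Extractive systems return a span of the passage."
)
question = "What do extractive systems return?"

result = qa(question=question, context=passage)
print(result["answer"], round(result["score"], 3))
```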
Most implemented papers
DuReader_robust: A Chinese Dataset Towards Evaluating Robustness and Generalization of Machine Reading Comprehension in Real-World Applications
Machine reading comprehension (MRC) is a crucial task in natural language processing and has seen remarkable advances.
KLUE: Korean Language Understanding Evaluation
We introduce the Korean Language Understanding Evaluation (KLUE) benchmark.
ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information
Recent pretraining models in Chinese neglect two important aspects specific to the Chinese language: glyph and pinyin, which carry significant syntactic and semantic information for language understanding.
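Roughly, the model fuses character, glyph, and pinyin information at the input layer. The sketch below shows one simplified way such a fusion can look; the module names, dimensions, and the use of plain embedding tables (instead of the paper's encoders over glyph images and pinyin strings) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FusedChineseEmbedding(nn.Module):
    """Simplified sketch: concatenate character, glyph, and pinyin embeddings,
    then project back to the model dimension. Real systems derive the glyph
    and pinyin vectors from images/romanization rather than lookup tables."""
    def __init__(self, vocab_size: int = 21128, hidden: int = 768):
        super().__init__()
        self.char = nn.Embedding(vocab_size, hidden)
        self.glyph = nn.Embedding(vocab_size, hidden)   # stand-in for a glyph-image encoder
        self.pinyin = nn.Embedding(vocab_size, hidden)  # stand-in for a pinyin-sequence encoder
        self.fuse = nn.Linear(3 * hidden, hidden)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        parts = [self.char(token_ids), self.glyph(token_ids), self.pinyin(token_ids)]
        return self.fuse(torch.cat(parts, dim=-1))
```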
Text Understanding with the Attention Sum Reader Network
Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test.
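The reader's core mechanism is a pointer-sum readout: attention over document positions is computed against the question encoding, and the probability of a candidate answer is the summed attention mass over all of its occurrences. A minimal NumPy sketch, assuming the contextual encodings are already computed (in the paper they come from bidirectional GRUs):

```python
import numpy as np

def attention_sum_answer(doc_ids, doc_enc, query_enc, candidates):
    """Pointer-sum readout sketch: doc_ids is the document as token ids (length T),
    doc_enc is a (T, d) array of contextual encodings, query_enc a (d,) query
    vector, and candidates a set of candidate answer token ids."""
    scores = doc_enc @ query_enc                 # dot-product attention scores
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                           # softmax over document positions
    doc_ids = np.asarray(doc_ids)
    # Sum the attention mass over every occurrence of each candidate.
    probs = {c: attn[doc_ids == c].sum() for c in candidates}
    return max(probs, key=probs.get)
```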
Pre-Training with Whole Word Masking for Chinese BERT
To demonstrate the effectiveness of whole word masking, we create a series of Chinese pre-trained language models as our baselines, including BERT, RoBERTa, ELECTRA, and RBT.
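Whole word masking changes only the masking step of BERT-style pretraining: the mask/no-mask decision is made per word, and then applied to every sub-token (for Chinese, every character) of the chosen word. A minimal sketch, assuming a word segmenter has already grouped sub-tokens into words:

```python
import random

def whole_word_mask(words, mask_prob=0.15, mask_token="[MASK]"):
    """`words` is a list of words, each a list of sub-tokens, e.g. the output of
    a Chinese word segmenter followed by tokenization. Masking is decided per
    word and applied to all of its sub-tokens (a simplification: real BERT
    pretraining also sometimes replaces tokens randomly or keeps them)."""
    tokens = []
    for word in words:
        if random.random() < mask_prob:
            tokens.extend([mask_token] * len(word))  # mask the whole word
        else:
            tokens.extend(word)
    return tokens

# whole_word_mask([["语", "言"], ["模", "型"]]) may yield
# ["[MASK]", "[MASK]", "模", "型"] -- never a half-masked word.
```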
MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension
Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligent systems to understand human language.
NumNet: Machine Reading Comprehension with Numerical Reasoning
Numerical reasoning, such as addition, subtraction, sorting, and counting, is a critical skill in human reading comprehension that has not been well considered in existing machine reading comprehension (MRC) systems.
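NumNet's central idea is to reason over a comparison graph whose nodes are the numbers mentioned in the question and passage. The sketch below only builds such a graph's edges (a directed edge i → j meaning numbers[i] > numbers[j]); the paper then runs a graph neural network over the graph, which is omitted here:

```python
def build_comparison_graph(numbers):
    """Edges of a number-comparison graph: (i, j) means numbers[i] > numbers[j].
    This is an illustrative reduction of the graph NumNet reasons over."""
    return [(i, j)
            for i, a in enumerate(numbers)
            for j, b in enumerate(numbers)
            if i != j and a > b]

# build_comparison_graph([3, 7, 1]) -> [(0, 2), (1, 0), (1, 2)]
```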
Dice Loss for Data-imbalanced NLP Tasks
Many NLP tasks such as tagging and machine reading comprehension are faced with the severe data imbalance issue: negative examples significantly outnumber positive examples, and the huge number of background examples (or easy-negative examples) overwhelms the training.
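Dice loss replaces (or complements) cross-entropy with a soft version of the Dice coefficient, which is less sensitive to the number of easy negatives because it is driven by the overlap between predictions and positives. A plain soft-Dice sketch for binary decisions (the paper's variant adds a self-adjusting weighting factor on top of this):

```python
import torch

def soft_dice_loss(probs: torch.Tensor, targets: torch.Tensor,
                   smooth: float = 1.0) -> torch.Tensor:
    """probs: predicted positive-class probabilities, targets: {0, 1} labels,
    both flattened to the same shape. `smooth` avoids division by zero and
    softens the loss on all-negative batches."""
    intersection = (probs * targets).sum()
    return 1.0 - (2.0 * intersection + smooth) / (probs.sum() + targets.sum() + smooth)
```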
Retrospective Reader for Machine Reading Comprehension
Inspired by how humans solve reading comprehension questions, we propose a retrospective reader (Retro-Reader) that integrates two stages of reading and verification strategies: 1) sketchy reading, which briefly investigates the overall interactions of passage and question and yields an initial judgment; 2) intensive reading, which verifies the answer and gives the final prediction.
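In pipeline terms, the sketchy stage contributes an external answerability score and the intensive stage contributes span scores plus an internal span-vs-null margin; the final answer/no-answer decision combines the two. The combination below is an illustrative linear form, not the paper's exact formulation:

```python
def rear_verification(best_span_score: float, null_score: float,
                      external_verifier_score: float, threshold: float = 0.0) -> str:
    """Combine the intensive reader's span-vs-null margin with the sketchy
    reader's external answerability score (names and weighting are assumptions)."""
    combined = (best_span_score - null_score) + external_verifier_score
    return "answer" if combined > threshold else "no answer"
```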
Asking Questions the Human Way: Scalable Question-Answer Generation from Text Corpus
In this paper, we propose Answer-Clue-Style-aware Question Generation (ACS-QG), which aims to automatically generate high-quality and diverse question-answer pairs from unlabeled text corpora at scale by imitating the way a human asks questions.