Machine Reading Comprehension
197 papers with code • 4 benchmarks • 41 datasets
Machine Reading Comprehension is one of the key problems in Natural Language Understanding: given a text passage, the task is to read and comprehend it and then answer questions about it.
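As a minimal illustration of the task (a sketch only; the Hugging Face pipeline API and the deepset/roberta-base-squad2 checkpoint are illustrative choices, not tied to any paper listed below), an extractive MRC model takes a passage and a question and returns an answer span:

    # Minimal extractive MRC sketch using the Hugging Face
    # question-answering pipeline; any SQuAD-style checkpoint works.
    from transformers import pipeline

    qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

    passage = ("The Belebele benchmark is a parallel reading comprehension "
               "dataset covering 122 language variants.")
    result = qa(question="How many language variants does Belebele cover?",
                context=passage)

    # The pipeline returns the answer span, its character offsets in the
    # passage, and a confidence score.
    print(result["answer"], result["start"], result["end"], result["score"])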
Libraries
Use these libraries to find Machine Reading Comprehension models and implementations.

Latest papers
Instructive Dialogue Summarization with Query Aggregations
With the advancement of instruction-finetuned language models, we introduce instruction-tuning to dialogues to expand the capability set of dialogue summarization models.
Named Entity Recognition via Machine Reading Comprehension: A Multi-Task Learning Approach
In this paper, we propose to incorporate the label dependencies among entity types into a multi-task learning framework for better MRC-based NER.
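As a rough sketch of the general MRC-based NER framing (one natural-language query per entity type; the queries, the model, and the single-span simplification below are illustrative assumptions, not this paper's multi-task architecture):

    # Casting NER as MRC: ask one query per entity type and extract the
    # answer span. A real MRC-NER reader predicts multiple spans per
    # query; the single-span QA pipeline here only approximates that.
    from transformers import pipeline

    qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

    type_queries = {
        "PER": "Which person is mentioned in the text?",
        "ORG": "Which organization is mentioned in the text?",
        "LOC": "Which location is mentioned in the text?",
    }
    text = "Barack Obama visited the Microsoft campus in Redmond."

    # The label dependencies this paper models would sit on top of
    # these per-type predictions.
    for label, query in type_queries.items():
        pred = qa(question=query, context=text)
        print(label, "->", pred["answer"], f"(score={pred['score']:.2f})")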
The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants
We use this dataset to evaluate the capabilities of multilingual masked language models (MLMs) and large language models (LLMs).
Single-Sentence Reader: A Novel Approach for Addressing Answer Position Bias
Machine Reading Comprehension (MRC) models tend to take advantage of spurious correlations (also known as dataset bias or annotation artifacts in the research community).
Zero-shot Query Reformulation for Conversational Search
Existing zero-shot methods face three primary limitations: they are not universally applicable to all retrievers, their effectiveness lacks sufficient explainability, and they struggle to resolve common conversational ambiguities caused by omission.
IDOL: Indicator-oriented Logic Pre-training for Logical Reasoning
IDOL achieves state-of-the-art performance on ReClor and LogiQA, the two most representative benchmarks in logical reasoning MRC. It is also shown to generalize to different pre-trained models and to other types of MRC benchmarks such as RACE and SQuAD 2.0, while keeping competitive general language understanding ability on the GLUE tasks.
Sentence-level Event Detection without Triggers via Prompt Learning and Machine Reading Comprehension
The traditional approach to sentence-level event detection involves two important subtasks, trigger identification and trigger classification, where the identified event trigger words are used to classify event types from sentences.
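As a hedged sketch of the trigger-free idea (framed here with an NLI-based zero-shot classifier rather than this paper's prompt-learning-plus-MRC design; the facebook/bart-large-mnli model and the event types are assumptions), sentence-level event types can be scored directly, with no trigger identification step:

    # Trigger-free, sentence-level event detection sketched as prompting:
    # score each candidate event type against the sentence via NLI-based
    # zero-shot classification. No trigger words are identified.
    from transformers import pipeline

    clf = pipeline("zero-shot-classification",
                   model="facebook/bart-large-mnli")

    sentence = "Protesters clashed with police outside the parliament."
    event_types = ["attack", "demonstration", "meeting", "election"]

    result = clf(sentence, candidate_labels=event_types,
                 hypothesis_template="This sentence describes a {} event.")

    # Labels are returned sorted by score, highest first.
    for label, score in zip(result["labels"], result["scores"]):
        print(f"{label}: {score:.2f}")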
Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension
Machine reading comprehension (MRC) over logical reasoning poses new challenges: the task is to understand the implicit logical relations entailed in the given contexts and perform inference over them.
Bidirectional End-to-End Learning of Retriever-Reader Paradigm for Entity Linking
Entity Linking (EL) is a fundamental task for Information Extraction and Knowledge Graphs.
Bridging the Gap between Decision and Logits in Decision-based Knowledge Distillation for Pre-trained Language Models
By combining the theoretical and empirical estimations of the decision distributions, the estimation of logits can be reduced to a simple root-finding problem.