Machine Reading Comprehension

197 papers with code • 4 benchmarks • 41 datasets

Machine Reading Comprehension is one of the key problems in Natural Language Understanding: the task is to read and comprehend a given text passage, and then answer questions based on it.

Source: Making Neural Machine Reading Comprehension Faster
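The extractive setting of the task can be illustrated with a deliberately simple baseline: answer a question by returning the passage sentence that shares the most content words with it. This is a toy sketch of the task setup only, not a real MRC model.

```python
# Toy machine reading comprehension baseline: answer a question by returning
# the passage sentence with the highest content-word overlap with the question.
# A simplified illustration of the task, not an actual MRC system.

def read_and_answer(passage: str, question: str) -> str:
    stop = {"what", "who", "is", "the", "a", "an", "of", "in", "on", "to"}

    def words(text: str) -> set[str]:
        return {w.lower().strip(".,?!") for w in text.split()} - stop

    q = words(question)
    sentences = [s.strip() + "." for s in passage.split(".") if s.strip()]
    # Pick the sentence sharing the most content words with the question.
    return max(sentences, key=lambda s: len(words(s) & q))

passage = "Paris is the capital of France. Berlin is the capital of Germany."
print(read_and_answer(passage, "What is the capital of Germany?"))
# → "Berlin is the capital of Germany."
```

Real MRC models replace the lexical-overlap heuristic with learned representations, but the input/output contract — (passage, question) in, answer span out — is the same.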


Most implemented papers

Teaching Machine Comprehension with Compositional Explanations

INK-USC/nl-explanation Findings of the Association for Computational Linguistics 2020

Advances in machine reading comprehension (MRC) rely heavily on the collection of large scale human-annotated examples in the form of (question, paragraph, answer) triples.

LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning

lgw863/LogiQA-dataset 16 Jul 2020

Machine reading is a fundamental task for testing the capability of natural language understanding, which is closely related to human cognition in many aspects.

ECONET: Effective Continual Pretraining of Language Models for Event Temporal Reasoning

pluslabnlp/econet EMNLP 2021

While pre-trained language models (PTLMs) have achieved noticeable success on many NLP tasks, they still struggle for tasks that require event temporal reasoning, which is essential for event-centric applications.

Bidirectional Machine Reading Comprehension for Aspect Sentiment Triplet Extraction

NKU-IIPLab/BMRC 13 Mar 2021

Aspect sentiment triplet extraction (ASTE), which aims to identify aspects from review sentences along with their corresponding opinion expressions and sentiments, is an emerging task in fine-grained opinion mining.
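Framing ASTE as machine reading comprehension means decomposing extraction into a chain of queries over the review sentence. The sketch below shows what such a query chain might look like; the query wording is illustrative, not the exact templates from the BMRC paper.

```python
# Illustrative sketch of casting aspect sentiment triplet extraction (ASTE)
# as machine reading comprehension: triplets are extracted via a chain of
# queries answered against the review sentence. Query wording is hypothetical.

def aspect_query() -> str:
    return "What aspects are mentioned in the review?"

def opinion_query(aspect: str) -> str:
    return f"What opinion words describe the aspect '{aspect}'?"

def sentiment_query(aspect: str, opinion: str) -> str:
    return f"What is the sentiment of '{aspect}' given the opinion '{opinion}'?"

# An MRC model would answer each query in turn, yielding triples such as
# ("battery life", "great", "positive") for "The battery life is great."
```

The "bidirectional" part of the method runs the chain in both directions (aspect-first and opinion-first), so neither element's extraction depends entirely on the other being found first.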

Dependency Parsing as MRC-based Span-Span Prediction

ShannonAI/mrc-for-dependency-parsing ACL 2022

The proposed method has the following merits: (1) it addresses the fundamental problem that edges in a dependency tree should be constructed between subtrees; (2) the MRC framework allows the method to retrieve missing spans in the span proposal stage, which leads to higher recall for eligible spans.

Fact-driven Logical Reasoning for Machine Reading Comprehension

ozyyshr/DGM NeurIPS 2021

Recent years have witnessed an increasing interest in training machines with reasoning ability, which deeply relies on accurately and clearly presented clue forms.

GENIUS: Sketch-based Language Model Pre-training via Extreme and Selective Masking for Text Generation and Augmentation

beyondguo/genius 18 Nov 2022

We introduce GENIUS: a conditional text generation model using sketches as input, which can fill in the missing contexts for a given sketch (key information consisting of textual spans, phrases, or words, concatenated by mask tokens).
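Building such a sketch input amounts to joining the kept key spans with mask tokens that mark the contexts the generator must fill in. A minimal sketch-builder, assuming a `<mask>` token string and a pre-selected list of key spans (both are illustrative assumptions):

```python
# Illustrative construction of a GENIUS-style "sketch": key spans (textual
# spans, phrases, or words) concatenated by mask tokens that stand for the
# missing contexts. The "<mask>" token string is an assumption.

def build_sketch(key_spans: list[str], mask_token: str = "<mask>") -> str:
    # Join the kept spans with mask tokens marking the text to be generated.
    return f" {mask_token} ".join(key_spans)

sketch = build_sketch(["machine reading comprehension", "large-scale datasets"])
print(sketch)
# → "machine reading comprehension <mask> large-scale datasets"
```

The generation model then conditions on this sketch and produces full text in which the masked gaps are filled with fluent context around the given spans.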

Rethinking Label Smoothing on Multi-hop Question Answering

yinzhangyue/smoothing-r3 19 Dec 2022

Multi-Hop Question Answering (MHQA) is a significant area in question answering, requiring multiple reasoning components, including document retrieval, supporting sentence prediction, and answer span extraction.
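The technique the paper revisits is standard label smoothing: mixing the one-hot target with a uniform distribution so the model is not pushed toward fully confident predictions. A generic sketch of the vanilla scheme (not the paper's modified variant):

```python
# Standard label smoothing: interpolate a one-hot target distribution with a
# uniform distribution over the K classes. With smoothing factor eps, the true
# class gets (1 - eps) + eps/K and every other class gets eps/K.
# This is the vanilla formulation, not the paper's MHQA-specific variant.

def smooth_labels(one_hot: list[float], eps: float = 0.1) -> list[float]:
    k = len(one_hot)
    return [(1.0 - eps) * y + eps / k for y in one_hot]

print(smooth_labels([1.0, 0.0, 0.0, 0.0], eps=0.1))
# → [0.925, 0.025, 0.025, 0.025]
```

Note the smoothed targets still sum to 1, so they remain a valid distribution for cross-entropy training; the paper's contribution is examining when this regularization helps or hurts the retrieval, supporting-sentence, and span-extraction components of MHQA.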

Interpreting Themes from Educational Stories

ritual-uh/edustory 8 Apr 2024

Reading comprehension continues to be a crucial research focus in the NLP community.

Building Large Machine Reading-Comprehension Datasets using Paragraph Vectors

google/mcafp 13 Dec 2016

We present a dual contribution to the task of machine reading-comprehension: a technique for creating large-sized machine-comprehension (MC) datasets using paragraph-vector models; and a novel, hybrid neural-network architecture that combines the representation power of recurrent neural networks with the discriminative power of fully-connected multi-layered networks.