Logical Reasoning Reading Comprehension

4 papers with code • 0 benchmarks • 1 dataset

Logical reasoning reading comprehension is a task introduced by the ReClor paper (ICLR 2020) to evaluate the logical reasoning ability of machine reading comprehension models. ReClor is the first dataset built for this task.
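Each example in this task is a short passage paired with a question and four answer options, and systems are scored by how often they pick the option matching the gold label. The sketch below (Python) shows one way such examples might be loaded and scored; the JSON field names follow the public ReClor release, and the file name val.json is only a placeholder assumption.

    import json

    # Load ReClor-style multiple-choice examples from a JSON file.
    # Each item is assumed to have "context", "question", "answers" (4 options)
    # and an integer "label" giving the index of the correct option.
    def load_examples(path):
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)

    # Accuracy: fraction of examples whose predicted option index equals the gold label.
    def accuracy(examples, predictions):
        correct = sum(1 for ex, pred in zip(examples, predictions) if pred == ex["label"])
        return correct / len(examples)

    if __name__ == "__main__":
        examples = load_examples("val.json")   # placeholder path
        predictions = [0] * len(examples)      # trivial baseline: always pick option 0
        print(f"Accuracy over {len(examples)} examples: {accuracy(examples, predictions):.3f}")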

Datasets

ReClor
Most implemented papers

ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning

yuweihao/reclor ICLR 2020

Empirical results show that state-of-the-art models have an outstanding ability to capture biases contained in the dataset, achieving high accuracy on the EASY set.

DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models

vamsi995/paraphrase-generator *SEM (NAACL) 2022

In this paper, we present and implement a multi-dimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs).

Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning

strong-ai-lab/logical-equivalence-driven-amr-data-augmentation-for-representation-learning 21 May 2023

Combining large language models with logical reasoning enhances their capacity to address problems in a robust and reliable manner.

LogiQA 2.0—An Improved Dataset for Logical Reasoning in Natural Language Understanding

2024-MindSpore-1/Code2 journal 2023

The dataset is an amendment and re-annotation of LogiQA (2020), a large-scale logical reasoning reading comprehension dataset adapted from the Chinese Civil Service Examination.