Reading Comprehension
568 papers with code • 7 benchmarks • 95 datasets
Most current question answering datasets frame the task as reading comprehension, where the question is about a paragraph or document and the answer is often a span in the document.
Some specific tasks of reading comprehension include multi-modal machine reading comprehension and textual machine reading comprehension, among others. In the literature, machine reading comprehension can be divided into four categories: cloze style, multiple choice, span prediction, and free-form answer.
Benchmark datasets used for testing a model's reading comprehension abilities include MovieQA, ReCoRD, and RACE, among others.
The Machine Reading group at UCL also provides an overview of reading comprehension tasks.
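To make the span-prediction category above concrete, here is a minimal sketch of an extractive baseline: it returns the context sentence with the highest word overlap with the question, standing in for the span a trained reader model would predict. The function name and the overlap heuristic are illustrative assumptions, not a method from any of the cited benchmarks.

```python
def extractive_baseline(question: str, context: str) -> str:
    """Toy span prediction: pick the context sentence whose word overlap
    with the question is largest (a crude stand-in for a trained reader)."""
    # Normalize the question: lowercase and drop the trailing question mark.
    q_words = set(question.lower().replace("?", "").split())
    # Split the context into rough sentences on periods.
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    # Score each sentence by how many question words it shares.
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

context = "Paris is the capital of France. Berlin is the capital of Germany"
print(extractive_baseline("What is the capital of France?", context))
# prints "Paris is the capital of France"
```

Real span-prediction models instead predict start and end token positions within the passage, but the input/output contract (question + context in, a passage substring out) is the same.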
Figure source: A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets
Libraries
Use these libraries to find Reading Comprehension models and implementations.
Subtasks
- Machine Reading Comprehension
- Intent Recognition
- Implicit Relations
- LAMBADA
- Question Selection
- Multi-Hop Reading Comprehension
- Implicatures
- Logical Reasoning Reading Comprehension
- English Proverbs
- Fantasy Reasoning
- Figure Of Speech Detection
- Formal Fallacies Syllogisms Negation
- GRE Reading Comprehension
- Hyperbaton
- Movie Dialog Same Or Different
- Nonsense Words Grammar
- Phrase Relatedness
- RACE-h
- RACE-m
Latest papers with no code
Exploring Autonomous Agents through the Lens of Large Language Models: A Review
Large Language Models (LLMs) are transforming artificial intelligence, enabling autonomous agents to perform diverse tasks across various domains.
The Death of Feature Engineering? BERT with Linguistic Features on SQuAD 2.0
We conclude that the BERT base model will be improved by incorporating the features.
Exploring the Nexus of Large Language Models and Legal Systems: A Short Survey
With the advancement of Artificial Intelligence (AI) and Large Language Models (LLMs), there is a profound transformation occurring in the realm of natural language processing tasks within the legal domain.
Towards Human-Like Machine Comprehension: Few-Shot Relational Learning in Visually-Rich Documents
This approach aims to generate relation representations that are more aware of the spatial context and unseen relations, in a manner similar to human perception.
MRC-based Nested Medical NER with Co-prediction and Adaptive Pre-training
In medical information extraction, medical Named Entity Recognition (NER) is indispensable, playing a crucial role in developing medical knowledge graphs, enhancing medical question-answering systems, and analyzing electronic medical records.
Knowledge Condensation and Reasoning for Knowledge-based VQA
We condense the retrieved knowledge passages from two perspectives.
CuentosIE: can a chatbot about "tales with a message" help to teach emotional intelligence?
In this article, we present CuentosIE (TalesEI: chatbot of tales with a message to develop Emotional Intelligence), an educational chatbot on emotions that also provides teachers and psychologists with a tool to monitor their students/patients through indicators and data compiled by CuentosIE.
Towards a Psychology of Machines: Large Language Models Predict Human Memory
Participants, both human and ChatGPT, were presented with pairs of sentences.
SaulLM-7B: A pioneering Large Language Model for Law
In this paper, we introduce SaulLM-7B, a large language model (LLM) tailored for the legal domain.
AceMap: Knowledge Discovery through Academic Graph
The exponential growth of scientific literature requires effective management and extraction of valuable insights.