Question Answering
2334 papers with code • 144 benchmarks • 351 datasets
Question Answering is the task of answering questions (typically reading comprehension questions), but abstaining when presented with a question that cannot be answered based on the provided context.
Question answering can be segmented into domain-specific tasks like community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotPotQA, bAbI, TriviaQA, WikiQA, and many others. Models for question answering are typically evaluated on metrics like EM (exact match) and F1. Some recent top-performing models are T5 and XLNet.
(Image credit: SQuAD)
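The EM and F1 metrics mentioned above are typically computed per question and averaged over the dataset. Below is a minimal, unofficial sketch of SQuAD-style scoring, assuming the usual answer normalization (lowercasing, stripping punctuation and articles); the official evaluation script additionally handles multiple reference answers and other edge cases.

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, remove punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    """Token-level F1 between a predicted answer and a reference answer."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1 after normalization
print(round(f1_score("in Paris, France", "Paris"), 2))   # 0.5
```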
Libraries
Use these libraries to find Question Answering models and implementations.
Subtasks
- Open-Ended Question Answering
- Open-Domain Question Answering
- Conversational Question Answering
- Answer Selection
- Knowledge Base Question Answering
- Community Question Answering
- Multiple Choice Question Answering (MCQA)
- Long Form Question Answering
- Generative Question Answering
- Cross-Lingual Question Answering
- Science Question Answering
- Zero-Shot Video Question Answer
- Mathematical Question Answering
- Temporal/Causal QA
- Logical Reasoning Question Answering
- Multilingual Machine Comprehension in English Hindi
- True or False Question Answering
- Question Quality Assessment
Most implemented papers
Attention Is All You Need
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
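As a concrete illustration of how BERT-family models are used for extractive QA, here is a minimal sketch built on the Hugging Face `transformers` pipeline; the checkpoint name is an assumption, and any model fine-tuned on SQuAD-style data could be substituted.

```python
# Minimal extractive QA sketch using the Hugging Face `transformers` pipeline.
# The checkpoint name below is an assumption; swap in any BERT-family model
# fine-tuned on SQuAD-style data.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

result = qa(
    question="Which datasets are used to evaluate question answering systems?",
    context=(
        "Popular benchmark datasets for evaluating question answering systems "
        "include SQuAD, HotPotQA, bAbI, TriviaQA, and WikiQA."
    ),
)
print(result["answer"], result["score"])  # extracted span and its confidence
```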
Graph Attention Networks
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
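The line above summarizes the mechanism: per-edge attention scores are computed from transformed node features, masked to the graph's edges, and normalized with a softmax over each node's neighbourhood. A single-head NumPy sketch (simplified, with no multi-head concatenation or dropout) might look like this:

```python
import numpy as np

def gat_layer(H, A, W, a, negative_slope=0.2):
    """Single-head graph attention layer (simplified sketch).
    H: (N, F) node features; A: (N, N) adjacency matrix with self-loops;
    W: (F, F_out) shared linear transform; a: (2 * F_out,) attention vector."""
    Wh = H @ W                                       # (N, F_out) transformed features
    n = Wh.shape[0]
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = np.concatenate([Wh[i], Wh[j]]) @ a   # a^T [Wh_i || Wh_j]
            e[i, j] = s if s > 0 else negative_slope * s   # LeakyReLU
    e = np.where(A > 0, e, -1e9)                     # mask: attend only to neighbours
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)            # softmax over each neighbourhood
    return np.maximum(att @ Wh, 0.0)                 # ReLU(sum_j alpha_ij * Wh_j)

# Tiny usage example on a 3-node path graph with random parameters.
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))
A = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])
W = rng.normal(size=(4, 2))
a = rng.normal(size=(4,))
print(gat_layer(H, A, W, a).shape)                   # (3, 2)
```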
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging.
Deep contextualized word representations
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.
End-To-End Memory Networks
Applied to question answering, our approach is competitive with Memory Networks, but with less supervision.
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.
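As a rough illustration of those two noising operations, the sketch below shuffles sentence order and replaces sampled spans (lengths drawn from a Poisson distribution with λ = 3, as in the paper) with a single mask token. The `<mask>` string and the masking ratio are assumptions for illustration, not the exact BART preprocessing code.

```python
import random
import numpy as np

def sentence_permutation(sentences):
    """Noising step 1: randomly shuffle the order of the original sentences."""
    shuffled = list(sentences)
    random.shuffle(shuffled)
    return shuffled

def text_infilling(tokens, mask_token="<mask>", mask_ratio=0.3, poisson_lambda=3.0):
    """Noising step 2: sample span lengths from Poisson(lambda) and replace each
    sampled span with a single mask token (a length-0 span inserts a mask)."""
    tokens = list(tokens)
    budget = int(len(tokens) * mask_ratio)
    masked = 0
    while masked < budget:
        span_len = min(int(np.random.poisson(poisson_lambda)), len(tokens))
        start = random.randrange(len(tokens) - span_len + 1)
        tokens[start:start + span_len] = [mask_token]  # whole span -> one mask
        masked += max(span_len, 1)
    return tokens

sentences = ["The model is pre-trained with noised text.",
             "It then learns to reconstruct the original."]
print(sentence_permutation(sentences))
print(text_infilling("spans of text are replaced with a single mask token".split()))
```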
Language Models are Few-Shot Learners
By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.