Question Answering

2861 papers with code • 143 benchmarks • 360 datasets

Question Answering is the task of answering questions (typically reading comprehension questions), while abstaining when a question cannot be answered from the provided context.

Question answering can be segmented into domain-specific tasks like community question answering and knowledge-base question answering. Popular benchmark datasets for evaluating question answering systems include SQuAD, HotpotQA, bAbI, TriviaQA, WikiQA, and many others. Models for question answering are typically evaluated on metrics such as exact match (EM) and F1. Some recent top-performing models are T5 and XLNet.

(Image credit: SQuAD)
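
The EM and F1 metrics mentioned above are straightforward to compute. Below is a minimal sketch of SQuAD-style answer scoring, assuming the usual normalization convention (lowercasing, stripping punctuation and articles); the function names are only illustrative.

```python
import re
import string
from collections import Counter

def normalize(text):
    # SQuAD-style normalization: lowercase, drop punctuation and articles,
    # collapse whitespace.
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

def f1(prediction, gold):
    pred_tokens, gold_tokens = normalize(prediction).split(), normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_tokens), overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1.0 after normalization
print(f1("Eiffel Tower in Paris", "Eiffel Tower"))       # partial credit via token overlap
```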

Libraries

Use these libraries to find Question Answering models and implementations

Most implemented papers

LLaMA: Open and Efficient Foundation Language Models

facebookresearch/llama arXiv 2023

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters.

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

huggingface/transformers NeurIPS 2019

As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging.
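
For a sense of how the distilled model is used in practice, here is a short usage sketch with the huggingface/transformers pipeline API and the distilbert-base-cased-distilled-squad checkpoint released with the paper; the question and context are made up for illustration.

```python
from transformers import pipeline

# Extractive QA with a distilled BERT checkpoint fine-tuned on SQuAD.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="Which metrics are used to evaluate question answering?",
    context="Question answering models are typically evaluated with exact match (EM) and F1.",
)
print(result["answer"], result["score"])  # predicted span and its confidence
```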

Distributed Representations of Sentences and Documents

inejc/paragraph-vectors 16 May 2014

Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models.
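
A minimal sketch of paragraph vectors in practice, using gensim's Doc2Vec implementation rather than the inejc/paragraph-vectors repository listed above; the toy corpus and hyperparameters are only illustrative.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

docs = [
    "the cat sat on the mat",
    "a dog chased the cat",
    "question answering systems read a passage and answer a query",
]
corpus = [TaggedDocument(words=d.split(), tags=[i]) for i, d in enumerate(docs)]

# Train paragraph vectors: each document gets its own learned embedding,
# trained jointly with word vectors rather than built from word counts.
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

vec = model.infer_vector("a cat chased a dog".split())
print(model.dv.most_similar([vec], topn=1))  # nearest training document
```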

Bidirectional Attention Flow for Machine Comprehension

allenai/bi-att-flow 5 Nov 2016

Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query.
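
The attention layer that models these interactions can be sketched compactly: a similarity matrix over (context word, query word) pairs drives both context-to-query and query-to-context attention. The NumPy sketch below covers only the paper's attention layer; the encoders, the modeling layer, and the trainable weight vector w are assumed to exist.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidaf_attention(H, U, w):
    # H: (T, d) context encodings, U: (J, d) query encodings,
    # w: (3d,) similarity weights (trainable in the real model).
    T, d = H.shape
    J, _ = U.shape
    Ht = np.repeat(H[:, None, :], J, axis=1)                 # (T, J, d)
    Uj = np.repeat(U[None, :, :], T, axis=0)                 # (T, J, d)
    S = np.concatenate([Ht, Uj, Ht * Uj], axis=-1) @ w       # similarity matrix (T, J)
    a = softmax(S, axis=1)                                   # context-to-query weights
    U_tilde = a @ U                                          # attended query, per context word
    b = softmax(S.max(axis=1))                               # query-to-context weights
    h_tilde = np.tile(b @ H, (T, 1))                         # attended context, tiled
    # Query-aware representation of each context word.
    return np.concatenate([H, U_tilde, H * U_tilde, H * h_tilde], axis=-1)  # (T, 4d)
```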

Large Batch Optimization for Deep Learning: Training BERT in 76 minutes

tensorflow/addons ICLR 2020

In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches.
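
The layerwise adaptation in LAMB amounts to rescaling an Adam-style update by a per-layer trust ratio, roughly the ratio of the weight norm to the update norm. Below is a simplified NumPy sketch of a single update for one weight tensor; bias correction and the clipping and edge-case handling of the paper and of tensorflow/addons are omitted, and the hyperparameters are illustrative.

```python
import numpy as np

def lamb_step(w, g, m, v, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-6, wd=0.01):
    # Adam-style first and second moments.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    update = m / (np.sqrt(v) + eps) + wd * w          # update direction plus weight decay
    w_norm, u_norm = np.linalg.norm(w), np.linalg.norm(update)
    # Layerwise trust ratio: scale the step to the layer's weight norm.
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    return w - lr * trust * update, m, v
```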

XLNet: Generalized Autoregressive Pretraining for Language Understanding

zihangdai/xlnet NeurIPS 2019

With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

Longformer: The Long-Document Transformer

allenai/longformer 10 Apr 2020

To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer.
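
As a usage sketch, extractive QA with a Longformer checkpoint through huggingface/transformers might look like the following; the allenai/longformer-large-4096-finetuned-triviaqa checkpoint and the toy document are assumptions for illustration (the point is that the document could run to thousands of tokens).

```python
import torch
from transformers import AutoTokenizer, LongformerForQuestionAnswering

name = "allenai/longformer-large-4096-finetuned-triviaqa"
tok = AutoTokenizer.from_pretrained(name)
model = LongformerForQuestionAnswering.from_pretrained(name)

question = "What scales linearly with sequence length?"
document = "The Longformer attention mechanism scales linearly with sequence length."

inputs = tok(question, document, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# Pick the most likely answer span from the start/end logits.
start = out.start_logits.argmax()
end = out.end_logits.argmax()
print(tok.decode(inputs["input_ids"][0, start : end + 1]))
```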

Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

facebook/bAbI-tasks 19 Feb 2015

One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent.

A simple neural network module for relational reasoning

kimhc6028/relational-networks NeurIPS 2017

Relational reasoning is a central component of generally intelligent behavior, but has proven difficult for neural networks to learn.
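
The module itself is compact: an MLP g scores every ordered pair of objects, the pairwise outputs are summed, and a second MLP f maps that sum to an answer. A PyTorch sketch, with illustrative layer sizes rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    # RN(O) = f( sum over i, j of g(o_i, o_j) )
    def __init__(self, obj_dim, hidden=256, out_dim=10):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, objects):                       # objects: (batch, n, obj_dim)
        b, n, d = objects.shape
        oi = objects.unsqueeze(2).expand(b, n, n, d)
        oj = objects.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([oi, oj], dim=-1)           # every ordered pair (o_i, o_j)
        relations = self.g(pairs).sum(dim=(1, 2))     # sum of pairwise relation features
        return self.f(relations)                      # e.g. answer logits
```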

Pay Attention to MLPs

labmlai/annotated_deep_learning_paper_implementations NeurIPS 2021

Transformers have become one of the most important architectural innovations in deep learning and have enabled many breakthroughs over the past few years.