About

Question Answering is the task of answering questions (typically reading-comprehension questions) while abstaining when a question cannot be answered from the provided context. (Image credit: SQuAD)
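As a minimal sketch of this abstention behavior, the snippet below uses the Hugging Face transformers question-answering pipeline with handle_impossible_answer=True. The checkpoint name is an assumption; any SQuAD 2.0-style model that can predict "no answer" would work.

```python
# Minimal sketch: extractive QA with abstention via the `transformers`
# question-answering pipeline. The model name is an assumed SQuAD 2.0-style
# checkpoint, not prescribed by this page.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="deepset/roberta-base-squad2",  # assumption: any SQuAD 2.0 model works
)

context = "ELECTRA is pre-trained as a discriminator over corrupted text."
result = qa(
    question="Who wrote SQuAD?",       # unanswerable from this context
    context=context,
    handle_impossible_answer=True,     # allows the pipeline to return "no answer"
)

# SQuAD 2.0-style models signal abstention with an empty answer string.
if not result["answer"]:
    print("Abstained: question is unanswerable from the context.")
else:
    print(result["answer"], result["score"])
```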

Benchmarks


Libraries

Subtasks

Datasets

Greatest papers with code

ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators

ICLR 2020 tensorflow/models

Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not.

LANGUAGE MODELLING NATURAL LANGUAGE UNDERSTANDING QUESTION ANSWERING
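As an illustration of this replaced-token-detection objective, the sketch below runs the released ELECTRA discriminator through the real ElectraForPreTraining class in transformers on a sentence where one token has been swapped, as a generator sample would do during pre-training.

```python
# Sketch of ELECTRA's replaced-token-detection head using the real
# `transformers` ElectraForPreTraining class: per-token logits score whether
# each input token is original or a replacement.
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

name = "google/electra-small-discriminator"
tokenizer = ElectraTokenizerFast.from_pretrained(name)
discriminator = ElectraForPreTraining.from_pretrained(name)

# Assumed corruption: "ate" has been replaced by "cooked" in the input.
sentence = "the chef cooked the meal"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = discriminator(**inputs).logits  # shape: (1, seq_len)

# Positive logits mark tokens the discriminator believes were replaced.
predictions = (logits > 0).long()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label in zip(tokens, predictions[0]):
    print(f"{token}: {'replaced' if label else 'original'}")
```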

Learning a Natural Language Interface with Neural Programmer

28 Nov 2016 tensorflow/models

The main experimental result in this paper is that a single Neural Programmer model achieves 34.2% accuracy using only 10,000 examples with weak supervision.

PROGRAM INDUCTION QUESTION ANSWERING

Talking-Heads Attention

5 Mar 2020 tensorflow/models

We introduce "talking-heads attention" - a variation on multi-head attention which includes linear projections across the attention-heads dimension, immediately before and after the softmax operation. While inserting only a small number of additional parameters and a moderate amount of additional computation, talking-heads attention leads to better perplexities on masked language modeling tasks, as well as better quality when transfer-learning to language comprehension and question answering tasks.

LANGUAGE MODELLING QUESTION ANSWERING TRANSFER LEARNING
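A minimal PyTorch sketch of the mechanism described above, under simplifying assumptions: square (heads x heads) mixing matrices are applied to the attention logits before the softmax and to the attention weights after it. The paper also allows different head counts at each stage; the shapes and names here are illustrative only.

```python
# Sketch of talking-heads attention: learned linear mixes across the heads
# dimension, applied immediately before and after the softmax.
import torch

def talking_heads_attention(q, k, v, w_logits, w_weights):
    # q, k, v: (batch, heads, seq, dim_per_head)
    # w_logits, w_weights: (heads, heads) head-mixing matrices (assumed square)
    d = q.shape[-1]
    logits = torch.einsum("bhqd,bhkd->bhqk", q, k) / d ** 0.5
    # Pre-softmax projection across the heads dimension.
    logits = torch.einsum("bhqk,hj->bjqk", logits, w_logits)
    weights = logits.softmax(dim=-1)
    # Post-softmax projection across the heads dimension.
    weights = torch.einsum("bjqk,jh->bhqk", weights, w_weights)
    return torch.einsum("bhqk,bhkd->bhqd", weights, v)

# Example: 2 sequences, 4 heads, length 8, 16 dims per head.
b, h, n, d = 2, 4, 8, 16
q, k, v = (torch.randn(b, h, n, d) for _ in range(3))
w_l, w_w = torch.randn(h, h), torch.randn(h, h)
out = talking_heads_attention(q, k, v, w_l, w_w)
print(out.shape)  # torch.Size([2, 4, 8, 16])
```

Setting both mixing matrices to the identity recovers standard multi-head attention, which is why the extra parameter cost is small.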

Predicting Subjective Features of Questions of QA Websites using BERT

ICWR 2020 tensorflow/models

Community Question-Answering websites, such as StackOverflow and Quora, expect users to follow specific guidelines in order to maintain content quality.

COMMUNITY QUESTION ANSWERING

Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks

NeurIPS 2020 huggingface/transformers

Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks.

QUESTION ANSWERING TEXT GENERATION
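As a small, hedged example of this retrieval-augmented setup, the snippet below uses the RAG classes shipped in huggingface/transformers; use_dummy_dataset=True substitutes a tiny stand-in index for the full Wikipedia retrieval corpus so the example stays lightweight. Exact availability of these classes and flags may depend on the transformers version.

```python
# Sketch of retrieval-augmented generation with the huggingface/transformers
# RAG classes. `use_dummy_dataset=True` swaps the full Wikipedia index for a
# tiny stand-in so the example runs without the multi-GB retrieval corpus.
from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration

name = "facebook/rag-sequence-nq"
tokenizer = RagTokenizer.from_pretrained(name)
retriever = RagRetriever.from_pretrained(
    name, index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained(name, retriever=retriever)

# Retrieve supporting passages, then generate an answer conditioned on them.
inputs = tokenizer("who holds the record in 100m freestyle", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```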