
Question Answering

351 papers with code · Natural Language Processing

State-of-the-art leaderboards


Greatest papers with code

XLNet: Generalized Autoregressive Pretraining for Language Understanding

19 Jun 2019 huggingface/pytorch-pretrained-BERT

With the capability of modeling bidirectional contexts, denoising-autoencoding-based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

DOCUMENT RANKING LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE QUESTION ANSWERING READING COMPREHENSION SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS TEXT CLASSIFICATION

Language Models are Unsupervised Multitask Learners

Preprint 2019 huggingface/pytorch-pretrained-BERT

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

SOTA for Language Modelling on Text8 (using extra training data)

COMMON SENSE REASONING DOCUMENT SUMMARIZATION LANGUAGE MODELLING MACHINE TRANSLATION QUESTION ANSWERING READING COMPREHENSION

Deep contextualized word representations

NAACL 2018 zalandoresearch/flair

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).

CITATION INTENT CLASSIFICATION COREFERENCE RESOLUTION LANGUAGE MODELLING NAMED ENTITY RECOGNITION NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SEMANTIC ROLE LABELING SENTIMENT ANALYSIS
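A task-specific ELMo representation is a softmax-weighted sum of the model's per-layer hidden vectors, scaled by a learned scalar. A minimal pure-Python sketch of just that combination step, using toy two-dimensional layer vectors and made-up (hypothetical) layer weights:

```python
import math

def softmax(xs):
    # Normalize the scalar layer weights so they sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def elmo_combine(layer_vectors, layer_weights, gamma=1.0):
    # Collapse per-layer hidden vectors for one token into a single
    # representation: gamma * sum_j softmax(s)_j * h_j.
    s = softmax(layer_weights)
    dim = len(layer_vectors[0])
    out = [0.0] * dim
    for w, h in zip(s, layer_vectors):
        for i in range(dim):
            out[i] += gamma * w * h[i]
    return out

# Three toy "layers" (character-level input plus two biLSTM layers) for one token.
layers = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
rep = elmo_combine(layers, layer_weights=[0.0, 0.0, 0.0])  # equal weights -> mean
```

With zero (i.e., equal) weights the combination reduces to the mean of the layers; in practice the weights and gamma are learned per downstream task.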

Reading Wikipedia to Answer Open-Domain Questions

ACL 2017 facebookresearch/ParlAI

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION
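The paper's pipeline retrieves candidate articles and then reads them to extract an answer. A deliberately tiny stand-in for that retrieve-then-read flow, with a bag-of-words cosine retriever and a word-overlap "reader" over a two-article toy corpus (the articles and helper names are illustrative, not the paper's actual components):

```python
import math
from collections import Counter

ARTICLES = {  # toy stand-in for the Wikipedia dump
    "Paris": "Paris is the capital of France. It lies on the Seine.",
    "Berlin": "Berlin is the capital of Germany.",
}

def tokens(text):
    return [w.strip(".,?").lower() for w in text.split()]

def retrieve(question, articles):
    # Rank articles by bag-of-words cosine similarity to the question.
    q = Counter(tokens(question))
    def score(doc):
        d = Counter(tokens(doc))
        num = sum(q[w] * d[w] for w in q)
        den = (math.sqrt(sum(v * v for v in q.values()))
               * math.sqrt(sum(v * v for v in d.values())))
        return num / den if den else 0.0
    return max(articles, key=lambda name: score(articles[name]))

def read(question, article):
    # Toy "reader": return the sentence with the most question-word overlap.
    qset = set(tokens(question))
    sentences = [s.strip() for s in article.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(qset & set(tokens(s))))

best = retrieve("What is the capital of France?", ARTICLES)
answer = read("What is the capital of France?", ARTICLES[best])
```

The real system replaces both stages with a TF-IDF/bigram retriever and a neural span-extraction reader, but the two-stage shape is the same.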

Key-Value Memory Networks for Directly Reading Documents

EMNLP 2016 facebookresearch/ParlAI

Directly reading documents and being able to answer questions from them is an unsolved challenge.

QUESTION ANSWERING
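The core memory operation here is addressing keys with the query and returning an attention-weighted sum of the paired values. A minimal single-hop sketch with toy two-dimensional vectors (the memory contents are made up for illustration):

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def kv_memory_read(query, keys, values):
    # One hop of key-value addressing: attend over keys with the query,
    # then return the attention-weighted sum of the value vectors.
    attn = softmax([dot(query, k) for k in keys])
    dim = len(values[0])
    out = [0.0] * dim
    for p, v in zip(attn, values):
        for i in range(dim):
            out[i] += p * v[i]
    return out

# Toy memory: keys encode text windows, values encode candidate answers.
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[5.0, 0.0], [0.0, 5.0]]
out = kv_memory_read([4.0, 0.0], keys, values)  # query strongly matches key 0
```

Splitting keys from values is what lets the model match on one representation (e.g., a question pattern) while returning another (e.g., the answer entity); the full model stacks several such hops.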

Large-scale Simple Question Answering with Memory Networks

5 Jun 2015 facebookresearch/ParlAI

Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions.

QUESTION ANSWERING TRANSFER LEARNING

Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

19 Feb 2015 facebookresearch/ParlAI

One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent.

QUESTION ANSWERING READING COMPREHENSION

Simple Recurrent Units for Highly Parallelizable Recurrence

EMNLP 2018 aymericdamien/TopDeepLearning

Common recurrent neural architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations.

MACHINE TRANSLATION QUESTION ANSWERING TEXT CLASSIFICATION
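The SRU sidesteps the parallelization problem by making every matrix multiply depend only on the current input, leaving just a cheap elementwise recurrence on the cell state. A scalar, pure-Python sketch of one unit (weights are arbitrary illustrative values, not trained parameters):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sru_step(x, c_prev, w, wf, bf, wr, br):
    # One scalar SRU step. All products involving weights use only x,
    # so across a sequence they can be computed in parallel; only the
    # elementwise update of c is sequential.
    x_tilde = w * x
    f = sigmoid(wf * x + bf)               # forget gate
    r = sigmoid(wr * x + br)               # reset/highway gate
    c = f * c_prev + (1.0 - f) * x_tilde   # light-weight recurrence
    h = r * math.tanh(c) + (1.0 - r) * x   # highway connection to input
    return h, c

c = 0.0
hs = []
for x in [1.0, -0.5, 2.0]:
    h, c = sru_step(x, c, w=0.5, wf=0.1, bf=0.0, wr=0.2, br=0.0)
    hs.append(h)
```

In the vectorized version the three input projections for the whole sequence are one batched matrix multiply, which is what makes the layer nearly as fast as a convolution.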