Question Answering

409 papers with code · Natural Language Processing

Greatest papers with code

RoBERTa: A Robustly Optimized BERT Pretraining Approach

26 Jul 2019 huggingface/pytorch-transformers

Language model pretraining has led to significant performance gains, but careful comparison between different approaches is challenging.

 SOTA for Question Answering on SQuAD2.0 dev (using extra training data)

LANGUAGE MODELLING LEXICAL SIMPLIFICATION NATURAL LANGUAGE INFERENCE QUESTION ANSWERING READING COMPREHENSION SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS
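The model is available through the repository listed above. A minimal sketch, assuming the `transformers` question-answering pipeline and a community RoBERTa checkpoint fine-tuned on SQuAD 2.0 (the checkpoint name below is an assumption, not part of this entry):

```python
# Sketch only: query a RoBERTa model fine-tuned on SQuAD 2.0 through the
# transformers pipeline API. "deepset/roberta-base-squad2" is an assumed
# community checkpoint name; substitute any RoBERTa QA checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
result = qa(
    question="What does RoBERTa remove from BERT pretraining?",
    context=(
        "RoBERTa trains BERT longer on more data with larger batches and "
        "dynamic masking, and drops the next-sentence prediction objective."
    ),
)
print(result["answer"], result["score"])
```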

Language Models are Unsupervised Multitask Learners

Preprint 2019 huggingface/pytorch-transformers

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

 SOTA for Language Modelling on Text8 (using extra training data)

COMMON SENSE REASONING DOCUMENT SUMMARIZATION LANGUAGE MODELLING MACHINE TRANSLATION QUESTION ANSWERING READING COMPREHENSION TEXT GENERATION

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

2 Oct 2019 huggingface/transformers

As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging.

LANGUAGE MODELLING LINGUISTIC ACCEPTABILITY NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS TRANSFER LEARNING
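As a rough illustration of the size reduction the abstract refers to, a minimal sketch comparing parameter counts of a BERT-base and a DistilBERT checkpoint (the checkpoint names are assumed standard Hugging Face hub identifiers):

```python
# Sketch only: compare parameter counts of BERT-base and DistilBERT.
# Checkpoint names are assumed standard Hugging Face hub identifiers.
from transformers import AutoModel

for name in ("bert-base-uncased", "distilbert-base-uncased"):
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.0f}M parameters")
```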

XLNet: Generalized Autoregressive Pretraining for Language Understanding

19 Jun 2019 huggingface/transformers

With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

DOCUMENT RANKING LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE QUESTION ANSWERING READING COMPREHENSION SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS TEXT CLASSIFICATION

Deep contextualized word representations

NAACL 2018 zalandoresearch/flair

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).

CITATION INTENT CLASSIFICATION COREFERENCE RESOLUTION LANGUAGE MODELLING NAMED ENTITY RECOGNITION NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SEMANTIC ROLE LABELING SENTIMENT ANALYSIS
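A minimal sketch using the flair library listed above (assuming its optional allennlp-backed `ELMoEmbeddings` class is installed), embedding the same word in two contexts to illustrate the polysemy modelling the abstract describes:

```python
# Sketch only: contextual ELMo vectors for the same surface form differ by
# context. Assumes flair with its optional ELMo/allennlp dependency.
import torch
from flair.data import Sentence
from flair.embeddings import ELMoEmbeddings

embedding = ELMoEmbeddings()

river = Sentence("He sat on the bank of the river .")
money = Sentence("She deposited the money at the bank .")
embedding.embed(river)
embedding.embed(money)

v1 = next(tok.embedding for tok in river if tok.text == "bank")
v2 = next(tok.embedding for tok in money if tok.text == "bank")
sim = torch.cosine_similarity(v1.unsqueeze(0), v2.unsqueeze(0)).item()
print(f"cosine similarity between the two 'bank' vectors: {sim:.3f}")
```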

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

26 Sep 2019 google-research/google-research

Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.

LINGUISTIC ACCEPTABILITY NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SEMANTIC TEXTUAL SIMILARITY

Reading Wikipedia to Answer Open-Domain Questions

ACL 2017 facebookresearch/ParlAI

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION
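The retrieve-then-read pattern the abstract describes (find a relevant article, then extract an answer span) can be sketched as follows; the TF-IDF retriever and the reader checkpoint are illustrative stand-ins, not the paper's actual DrQA components:

```python
# Sketch only: retrieve-then-read over a toy "Wikipedia". The retriever and
# the reader checkpoint are illustrative stand-ins for DrQA's components.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

articles = [
    "Paris is the capital and most populous city of France.",
    "Warsaw is the capital and largest city of Poland.",
]
question = "What is the capital of France?"

# 1) Retrieve the most relevant article by TF-IDF similarity.
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(articles)
scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
best_article = articles[int(scores.argmax())]

# 2) Read the retrieved article and extract an answer span.
reader = pipeline(
    "question-answering", model="distilbert-base-uncased-distilled-squad"
)
print(reader(question=question, context=best_article)["answer"])
```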

Key-Value Memory Networks for Directly Reading Documents

EMNLP 2016 facebookresearch/ParlAI

Directly reading documents and being able to answer questions from them is an unsolved challenge.

QUESTION ANSWERING

Large-scale Simple Question Answering with Memory Networks

5 Jun 2015 facebookresearch/ParlAI

Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions.

QUESTION ANSWERING TRANSFER LEARNING