Question Answering

493 papers with code · Natural Language Processing

(Image credit: SQuAD)

Leaderboards

Greatest papers with code

Predicting Subjective Features from Questions on QA Websites using BERT

ICWR 2020 tensorflow/models

Community Question-Answering websites, such as StackOverflow and Quora, expect users to follow specific guidelines in order to maintain content quality.

COMMON SENSE REASONING COMMUNITY QUESTION ANSWERING QUESTION QUALITY ASSESSMENT READING COMPREHENSION

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

26 Sep 2019 tensorflow/models

Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.

LINGUISTIC ACCEPTABILITY NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SEMANTIC TEXTUAL SIMILARITY

BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

29 Oct 2019 huggingface/transformers

We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.

#9 best model for Question Answering on SQuAD1.1 dev (F1 metric)

DENOISING MACHINE TRANSLATION NATURAL LANGUAGE INFERENCE QUESTION ANSWERING TEXT GENERATION
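
A minimal sketch of the two noising transformations described in the BART abstract, assuming whitespace-tokenized input; the function names are illustrative rather than taken from the BART codebase, and span lengths are drawn uniformly here for brevity, whereas BART samples them from a Poisson distribution.

```python
import random

MASK = "<mask>"

def shuffle_sentences(sentences):
    # Document-level noising: randomly shuffle the order of the original sentences.
    shuffled = list(sentences)
    random.shuffle(shuffled)
    return shuffled

def infill_spans(tokens, corrupt_prob=0.15, max_span=5):
    # Text infilling: replace a contiguous span of tokens with a single mask token.
    # BART draws span lengths from a Poisson distribution; a uniform draw is used
    # here only to keep the sketch short.
    noised, i = [], 0
    while i < len(tokens):
        if random.random() < corrupt_prob:
            noised.append(MASK)
            i += random.randint(1, max_span)  # the whole span collapses to one mask token
        else:
            noised.append(tokens[i])
            i += 1
    return noised
```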

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

NeurIPS 2019 huggingface/transformers

As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models on the edge and/or under constrained computational training or inference budgets remains challenging.

LANGUAGE MODELLING LINGUISTIC ACCEPTABILITY NATURAL LANGUAGE INFERENCE QUESTION ANSWERING SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS TRANSFER LEARNING
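
As a usage sketch, the huggingface/transformers pipeline API can run a distilled checkpoint for extractive question answering; the checkpoint name below assumes the distilbert-base-cased-distilled-squad model on the Hugging Face hub, and the question/context strings are arbitrary examples.

```python
from transformers import pipeline

# Extractive QA with a distilled SQuAD checkpoint served by huggingface/transformers.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="Why was DistilBERT proposed?",
    context="DistilBERT is a distilled version of BERT that is smaller, faster, "
            "cheaper and lighter, making it easier to run under constrained budgets.",
)
print(result["answer"], round(result["score"], 3))
```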

XLNet: Generalized Autoregressive Pretraining for Language Understanding

NeurIPS 2019 huggingface/transformers

With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling.

DOCUMENT RANKING LANGUAGE MODELLING NATURAL LANGUAGE INFERENCE QUESTION ANSWERING READING COMPREHENSION SEMANTIC TEXTUAL SIMILARITY SENTIMENT ANALYSIS TEXT CLASSIFICATION
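
A rough, illustrative contrast between the two pretraining objectives mentioned above, ignoring XLNet's two-stream attention and partial prediction; the function names and probabilities are placeholders, not part of either paper's released code.

```python
import random

def denoising_autoencoding_targets(tokens, mask_prob=0.15):
    # BERT-style pretraining: corrupt the input with [MASK] symbols and predict
    # only the masked positions, conditioning on the full bidirectional context.
    corrupted, targets = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            corrupted.append("[MASK]")
            targets.append(tok)
        else:
            corrupted.append(tok)
            targets.append(None)  # position is not predicted
    return corrupted, targets

def permuted_autoregressive_targets(tokens):
    # XLNet-style pretraining (simplified): sample a factorization order and predict
    # each token from the tokens preceding it in that order, with no [MASK] symbol.
    order = list(range(len(tokens)))
    random.shuffle(order)
    return [(tokens[i], [tokens[j] for j in order[:k]]) for k, i in enumerate(order)]
```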

Language Models are Unsupervised Multitask Learners

Preprint 2019 huggingface/transformers

Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.

SOTA for Language Modelling on Text8 (using extra training data)

COMMON SENSE REASONING DOCUMENT SUMMARIZATION LANGUAGE MODELLING MACHINE TRANSLATION QUESTION ANSWERING READING COMPREHENSION TEXT GENERATION
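
GPT-2 is distributed through huggingface/transformers, and the zero-shot, prompt-as-task framing in the abstract can be tried with the text-generation pipeline; the prompt and decoding settings below are arbitrary examples, not the paper's evaluation setup.

```python
from transformers import pipeline

# Zero-shot use of GPT-2: the task (here, question answering) is expressed in the
# prompt rather than through supervised fine-tuning on a task-specific dataset.
generator = pipeline("text-generation", model="gpt2")

prompt = "Question: Who wrote the novel Frankenstein?\nAnswer:"
output = generator(prompt, max_new_tokens=20, do_sample=False)
print(output[0]["generated_text"])
```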