Question answering is the task of producing an answer to a natural-language question, typically by extracting a span from a context passage (reading comprehension) or by retrieving it from a larger knowledge source (open-domain QA).
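As a concrete illustration, here is a minimal extractive-QA sketch using the Hugging Face `transformers` pipeline; the default model and the toy question/context are illustrative choices, not tied to any paper listed below.

```python
# Minimal extractive QA: the model selects an answer span from the context.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default SQuAD-tuned model

result = qa(
    question="Where was the Eiffel Tower built?",
    context="The Eiffel Tower was built in Paris between 1887 and 1889.",
)
print(result["answer"], result["score"])  # e.g. "Paris" plus a confidence score
```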
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
SOTA for Common Sense Reasoning on SWAG
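A minimal sketch of what "bidirectional encoder representations" means in practice: pulling per-token contextual vectors out of a pretrained BERT checkpoint via the Hugging Face `transformers` library. The checkpoint name and usage are illustrative, not the paper's original codebase.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("The bank raised interest rates.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One vector per wordpiece; each vector is conditioned on both the left
# and the right context, which is what "bidirectional" buys you.
print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768)
```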
We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy).
#2 best model for Coreference Resolution on CoNLL 2012
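A toy sketch of the core ELMo idea: a downstream task learns a softmax-normalized weighting over the biLM's layer representations plus a global scale, yielding one context-dependent vector per token. The shapes and the random stand-in "layer outputs" below are placeholders, not a real biLM.

```python
import torch

num_layers, seq_len, dim = 3, 5, 1024
layer_outputs = torch.randn(num_layers, seq_len, dim)  # stand-in for biLM layers

s = torch.softmax(torch.randn(num_layers), dim=0)      # learned layer weights s_j
gamma = torch.tensor(1.0)                              # learned scalar gamma

# ELMo_k = gamma * sum_j s_j * h_{k,j}
elmo = gamma * (s[:, None, None] * layer_outputs).sum(dim=0)
print(elmo.shape)  # (seq_len, dim): one context-dependent vector per token
```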
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets.
SOTA for Language Modelling on Penn Treebank (Word Level) (using extra training data)
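A sketch of the alternative this line of work points toward: condition a pretrained language model on a natural-language prompt and read the continuation off as the answer, with no task-specific supervision. The model name, prompt format, and decoding settings below are illustrative assumptions.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Q: What is the capital of France?\nA:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Greedy decoding; the continuation after "A:" is treated as the answer.
output_ids = model.generate(
    input_ids,
    max_new_tokens=5,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:]))
```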
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.
#106 best model for Question Answering on SQuAD1.1
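A toy retriever in the spirit of this retrieve-then-read setup: rank "articles" by TF-IDF similarity to the question, then hand the top hit to a span reader. The two-document corpus and the scikit-learn-based retriever are illustrative stand-ins for the paper's Wikipedia-scale pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "The Eiffel Tower is a wrought-iron lattice tower in Paris.",
    "Mount Everest is Earth's highest mountain above sea level.",
]
question = "Where is the Eiffel Tower?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(articles)
q_vector = vectorizer.transform([question])

scores = cosine_similarity(q_vector, doc_vectors)[0]
best = scores.argmax()
print(articles[best])  # the article a reader model would extract a span from
```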
Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions.
SOTA for Question Answering on WebQuestions
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent.
In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance compared with previous methods.
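A minimal sketch of what "text matching" means here: encode two texts into vectors and score their similarity. The embedding layer and mean pooling below are placeholders for whatever neural encoder a real matching model would use.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMatcher(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def encode(self, token_ids):
        return self.embed(token_ids).mean(dim=1)  # mean-pool token embeddings

    def forward(self, a_ids, b_ids):
        # One cosine-similarity matching score per text pair.
        return F.cosine_similarity(self.encode(a_ids), self.encode(b_ids))

matcher = TinyMatcher()
a = torch.randint(0, 1000, (1, 7))   # e.g. a tokenized question
b = torch.randint(0, 1000, (1, 12))  # e.g. a tokenized candidate answer
print(matcher(a, b))
```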
Though designed for decaNLP, MQAN also achieves state-of-the-art results on the WikiSQL semantic parsing task in the single-task setting.
Tasks: Domain Adaptation, Machine Translation, Named Entity Recognition (NER), Natural Language Inference, Question Answering, Relation Extraction, Semantic Parsing, Semantic Role Labeling, Sentiment Analysis, Text Classification, Transfer Learning
Common recurrent neural architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations.
#7 best model for Machine Translation on WMT2014 English-German
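A sketch of the trick this abstract alludes to (in the style of simple recurrent units): do every matrix multiplication over the whole sequence at once, leaving only a cheap element-wise recurrence to run step by step. Dimensions and weight initialization below are illustrative.

```python
import torch

seq_len, dim = 10, 32
x = torch.randn(seq_len, dim)
W, W_f, W_r = (torch.randn(dim, dim) for _ in range(3))

# Parallel part: no dependence between time steps, one batched matmul each.
x_tilde = x @ W
f = torch.sigmoid(x @ W_f)   # forget gates for every step at once
r = torch.sigmoid(x @ W_r)   # reset gates for every step at once

# Sequential part: element-wise only, so the loop stays cheap.
c = torch.zeros(dim)
h = []
for t in range(seq_len):
    c = f[t] * c + (1 - f[t]) * x_tilde[t]
    h.append(r[t] * torch.tanh(c) + (1 - r[t]) * x[t])
print(torch.stack(h).shape)  # (seq_len, dim)
```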