Open-domain question answering is the task of answering questions without a pre-specified context passage, typically by retrieving and reading evidence from a large corpus such as Wikipedia.
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method.
Ranked #6 on Question Answering on TriviaQA (F1 metric)
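For reference, here is a minimal sketch of BM25 scoring, the sparse baseline named above. The toy passages and the parameters k1 and b are illustrative placeholders, not any particular paper's setup:

```python
import math

# Toy corpus of tokenized passages (hypothetical).
passages = [
    "the cat sat on the mat".split(),
    "open domain question answering retrieves candidate passages".split(),
    "bm25 is a classic sparse retrieval baseline".split(),
]
N = len(passages)
avgdl = sum(len(p) for p in passages) / N

def idf(term):
    # Smoothed inverse document frequency (Lucene-style +1 inside the log).
    n_t = sum(term in p for p in passages)
    return math.log((N - n_t + 0.5) / (n_t + 0.5) + 1)

def bm25(query, doc, k1=1.5, b=0.75):
    # Sum a saturating, length-normalized term-frequency weight per query term.
    score = 0.0
    for term in query:
        f = doc.count(term)
        score += idf(term) * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

query = "sparse bm25 retrieval".split()
ranking = sorted(range(N), key=lambda i: bm25(query, passages[i]), reverse=True)
print(ranking)  # passage indices, best candidate first
```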
Large Transformer models routinely achieve state-of-the-art results on a number of tasks, but training these models can be prohibitively costly, especially on long sequences.
Ranked #2 on Open-Domain Question Answering on SearchQA
We introduce an approach for open-domain question answering (QA) that retrieves and reads a passage graph, where vertices are passages of text and edges represent relationships that are derived from an external knowledge base or co-occurrence in the same article.
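A minimal sketch of how such a passage graph could be assembled; the passages, article assignments, and knowledge-base links below are hypothetical placeholders:

```python
from collections import defaultdict
from itertools import combinations

# Vertices: passages, each tagged with its source article (hypothetical data).
passages = {
    "p1": {"article": "Paris", "text": "Paris is the capital of France."},
    "p2": {"article": "Paris", "text": "The Louvre is a museum in Paris."},
    "p3": {"article": "France", "text": "France is a country in Europe."},
}
# Hypothetical relationships derived from an external knowledge base.
kb_links = {("p1", "p3")}

graph = defaultdict(set)
for a, b in combinations(passages, 2):
    same_article = passages[a]["article"] == passages[b]["article"]
    kb_related = (a, b) in kb_links or (b, a) in kb_links
    # Edge if the passages co-occur in one article or are KB-related.
    if same_article or kb_related:
        graph[a].add(b)
        graph[b].add(a)

print({v: sorted(nbrs) for v, nbrs in graph.items()})  # adjacency list
```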
This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.
Ranked #1 on Open-Domain Question Answering on SQuAD1.1
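Extractive readers in this family score a start and an end position for every token and return the best-scoring valid span. A minimal decoding sketch with random placeholder scores standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = "Paris is the capital of France".split()
start_scores = rng.normal(size=len(tokens))  # placeholder reader outputs
end_scores = rng.normal(size=len(tokens))

# Pick the span (i, j) maximizing start + end score, with i <= j and a
# bounded answer length, as is standard for extractive QA decoding.
MAX_SPAN = 4
best, best_span = -np.inf, (0, 0)
for i in range(len(tokens)):
    for j in range(i, min(i + MAX_SPAN, len(tokens))):
        if start_scores[i] + end_scores[j] > best:
            best, best_span = start_scores[i] + end_scores[j], (i, j)

i, j = best_span
print(" ".join(tokens[i:j + 1]))  # predicted answer span
```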
Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.
Ranked #1 on Open-Domain Question Answering on DuReader
We explore two approaches for end-to-end supervised training of the reader and retriever components in OpenQA models.
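One common way to couple the two components end to end (a sketch under that assumption, not necessarily either of the paper's two approaches) is to treat the retrieved passage as a latent variable and marginalize the reader's likelihood over the retriever's distribution, so a single loss trains both:

```python
import torch
import torch.nn.functional as F

K = 8  # number of retrieved passages
# Placeholder scores; in practice these come from the retriever and
# reader networks for a given question-answer pair.
retriever_scores = torch.randn(K, requires_grad=True)  # s(q, z_i)
reader_logprobs = torch.randn(K, requires_grad=True)   # log p(a | q, z_i)

# log p(a | q) = logsumexp_i [ log p(z_i | q) + log p(a | q, z_i) ]
log_p_retrieve = F.log_softmax(retriever_scores, dim=0)
loss = -torch.logsumexp(log_p_retrieve + reader_logprobs, dim=0)
loss.backward()  # gradients reach retriever and reader jointly
print(loss.item())
```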
Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.
Ranked #8 on Question Answering on Natural Questions (short)
Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query.
Ranked #4 on Question Answering on CNN / Daily Mail
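A token-level similarity matrix with attention over the query is one widely used way to model these context-query interactions; below is a numpy sketch with random vectors standing in for encoder outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
T, J, d = 6, 4, 8            # context length, query length, hidden size
H = rng.normal(size=(T, d))  # placeholder context token encodings
U = rng.normal(size=(J, d))  # placeholder query token encodings

S = H @ U.T                        # (T, J) pairwise similarities
S -= S.max(axis=1, keepdims=True)  # stabilize the softmax
A = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)
U_tilde = A @ U  # query-aware representation for each context token
print(U_tilde.shape)  # (T, d)
```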
Transformers are powerful sequence models, but require time and memory that grow quadratically with the sequence length.
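A back-of-the-envelope illustration of that quadratic growth: self-attention materializes an L x L score matrix, so doubling the sequence length quadruples the memory for that matrix alone:

```python
# One float32 (L, L) attention score matrix per head, 4 bytes per entry.
for L in (1024, 2048, 4096, 8192):
    mib = L * L * 4 / 2**20
    print(f"L={L:5d}: {mib:7.0f} MiB")  # 4 MiB -> 16 -> 64 -> 256
```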
We show for the first time that it is possible to jointly learn the retriever and reader from question-answer string pairs and without any IR system.
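With no IR system, retrieval itself has to be learned, typically by scoring passages with an inner product of question and passage embeddings. A sketch with random projections standing in for trained encoders (an illustrative assumption, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 50, 16
W_q = rng.normal(size=(vocab, d))  # placeholder question encoder weights
W_p = rng.normal(size=(vocab, d))  # placeholder passage encoder weights

def embed(token_ids, W):
    # Bag-of-embeddings pooling stands in for a trained neural encoder.
    return W[token_ids].mean(axis=0)

question = np.array([3, 7, 11])
passages = [np.array([3, 7, 2]), np.array([40, 41]), np.array([11, 3])]

q = embed(question, W_q)
scores = [float(embed(p, W_p) @ q) for p in passages]
print(int(np.argmax(scores)))  # index of the highest-scoring passage
```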