Open-Domain Question Answering

91 papers with code • 11 benchmarks • 17 datasets

Open-domain question answering is the task of answering questions against a large, open knowledge source such as Wikipedia, rather than a single pre-specified passage.

Greatest papers with code

Dense Passage Retrieval for Open-Domain Question Answering

huggingface/transformers EMNLP 2020

Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method.

Open-Domain Question Answering • Passage Retrieval
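At the heart of dense passage retrieval is ranking passages by the inner product between a question embedding and precomputed passage embeddings, in contrast to sparse TF-IDF/BM25 scoring. A minimal stdlib-only sketch, with hand-made 3-d vectors standing in for real encoder outputs:

```python
# Toy sketch of dense passage retrieval: questions and passages are mapped
# to fixed-size vectors, and candidates are ranked by inner product.
# The 3-d "embeddings" below are illustrative stand-ins for encoder outputs.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def retrieve(question_vec, passage_vecs, k=1):
    """Return the indices of the top-k passages by inner-product score."""
    ranked = sorted(range(len(passage_vecs)),
                    key=lambda i: dot(question_vec, passage_vecs[i]),
                    reverse=True)
    return ranked[:k]

passage_vecs = [
    [0.9, 0.1, 0.0],   # passage 0: geography-flavoured
    [0.1, 0.8, 0.3],   # passage 1: history-flavoured
    [0.0, 0.2, 0.9],   # passage 2: science-flavoured
]
question_vec = [0.1, 0.9, 0.2]  # a history-flavoured question

print(retrieve(question_vec, passage_vecs, k=1))  # → [1]
```

In practice the passage vectors are indexed offline (e.g. with an approximate nearest-neighbour library), so retrieval at question time is a single similarity search rather than a scan.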

Reformer: The Efficient Transformer

huggingface/transformers ICLR 2020

Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences.

Image Generation • Language Modelling +1

Knowledge Guided Text Retrieval and Reading for Open Domain Question Answering

huggingface/transformers 10 Nov 2019

We introduce an approach for open-domain question answering (QA) that retrieves and reads a passage graph, where vertices are passages of text and edges represent relationships that are derived from an external knowledge base or co-occurrence in the same article.

Open-Domain Question Answering • Reading Comprehension +1
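The passage graph described above can be pictured as an adjacency structure whose vertices are passage IDs and whose edges come from knowledge-base links or same-article co-occurrence; an initially retrieved set can then be expanded by following edges. A toy sketch (graph contents and hop-limited expansion are illustrative assumptions, not the paper's exact algorithm):

```python
# Toy passage graph: vertices are passage IDs, edges link passages related
# via a knowledge base entity or co-occurrence in the same article.
from collections import deque

passage_graph = {
    "p0": ["p1"],        # p0 and p1 appear in the same article
    "p1": ["p0", "p2"],  # p1 and p2 share a KB entity
    "p2": ["p1"],
    "p3": [],            # isolated passage
}

def expand(seeds, graph, hops=1):
    """Breadth-first expansion of a retrieved seed set by up to `hops` edges."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return sorted(seen)

print(expand(["p0"], passage_graph, hops=2))  # → ['p0', 'p1', 'p2']
```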

Fine-tune the Entire RAG Architecture (including DPR retriever) for Question-Answering

huggingface/transformers 22 Jun 2021

In this paper, we illustrate how to fine-tune the entire Retrieval-Augmented Generation (RAG) architecture in an end-to-end manner.

Open-Domain Question Answering

Reading Wikipedia to Answer Open-Domain Questions

facebookresearch/ParlAI ACL 2017

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article.

Open-Domain Question Answering • Reading Comprehension
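The "answer is a text span in a Wikipedia article" setup implies a two-stage retrieve-then-read pipeline: rank articles against the question, then have a reader extract a span from the best one. A heavily simplified sketch in that spirit (term-overlap retrieval and a candidate-matching "reader" are stand-ins for the real TF-IDF retriever and neural reader):

```python
# Toy retrieve-then-read pipeline: rank articles by term overlap with the
# question, then scan the top article for an answer span.

articles = {
    "Paris": "Paris is the capital of France and its largest city.",
    "Berlin": "Berlin is the capital of Germany.",
}

def retrieve(question, docs):
    """Pick the document sharing the most terms with the question."""
    q_terms = set(question.lower().split())
    return max(docs, key=lambda t: len(q_terms & set(docs[t].lower().split())))

def read(text, candidates):
    """Stand-in reader: return the first candidate span found in the text."""
    for span in candidates:
        if span in text:
            return span
    return None

doc = retrieve("What is the capital of France ?", articles)
answer = read(articles[doc], ["Germany", "France"])
print(doc, answer)  # → Paris France
```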

ERNIE 2.0: A Continual Pre-training Framework for Language Understanding

PaddlePaddle/ERNIE 29 Jul 2019

Recently, pre-trained models have achieved state-of-the-art results in various language understanding tasks, which indicates that pre-training on large-scale corpora may play a crucial role in natural language processing.

Chinese Named Entity Recognition • Chinese Reading Comprehension +9

REALM: Retrieval-Augmented Language Model Pre-Training

deepset-ai/haystack 10 Feb 2020

Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering.

Fine-tuning • Language Modelling +1

End-to-End Training of Neural Retrievers for Open-Domain Question Answering

NVIDIA/Megatron-LM ACL 2021

We explore two approaches for end-to-end supervised training of the reader and retriever components in OpenQA models.

Open-Domain Question Answering • Unsupervised Pre-training

Bidirectional Attention Flow for Machine Comprehension

allenai/bi-att-flow 5 Nov 2016

Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query.

Cloze Test • Open-Domain Question Answering +1
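The context–query interactions modelled here are attention weights: in the context-to-query direction, each context position takes a softmax-weighted mix of the query vectors. A minimal sketch of that one direction (the tiny 2-d vectors are illustrative, not real encodings):

```python
# Toy sketch of context-to-query attention, one half of bidirectional
# attention flow: each context vector attends over all query vectors.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def context_to_query(context, query):
    """For each context position, return a weighted mix of query vectors."""
    attended = []
    for c in context:
        weights = softmax([dot(c, q) for q in query])
        mixed = [sum(w * q[d] for w, q in zip(weights, query))
                 for d in range(len(query[0]))]
        attended.append(mixed)
    return attended

context = [[1.0, 0.0], [0.0, 1.0]]
query = [[1.0, 0.0]]
print(context_to_query(context, query))  # each row is the single query vector
```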

Learning Dense Representations of Phrases at Scale

princeton-nlp/SimCSE ACL 2021

Open-domain question answering can be reformulated as a phrase retrieval problem, without the need for processing documents on-demand during inference (Seo et al., 2019).

Fine-tuning • Open-Domain Question Answering +4
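Recast as phrase retrieval, answering a question becomes a single maximum-inner-product search over an index of precomputed phrase vectors, with no per-question document reading. A toy sketch of that lookup (the phrase index and its 2-d vectors are hypothetical stand-ins for the offline-built phrase embeddings):

```python
# Toy sketch of QA as phrase retrieval: every candidate answer phrase in the
# corpus gets a precomputed vector, and answering a question is one
# maximum-inner-product search over that index.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hypothetical offline index: phrase -> stand-in embedding.
phrase_index = {
    "Barack Obama": [0.9, 0.1],
    "1961":         [0.1, 0.9],
    "Honolulu":     [0.5, 0.5],
}

def answer(question_vec, index):
    """Return the phrase whose vector maximises the inner product."""
    return max(index, key=lambda p: dot(question_vec, index[p]))

print(answer([0.0, 1.0], phrase_index))  # → 1961
```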