Open-Domain Question Answering

14 papers with code · Natural Language Processing
Subtask of Question Answering

Open-domain question answering is the task of answering questions using a large open-domain knowledge source such as Wikipedia, rather than a single passage supplied in advance.

State-of-the-art leaderboards

Greatest papers with code

Reading Wikipedia to Answer Open-Domain Questions

ACL 2017 facebookresearch/ParlAI

This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles).
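
The retrieve-then-read pipeline described above can be sketched in a few lines. The TF-IDF retriever and word-overlap "reader" below are illustrative stand-ins, not DrQA's actual components (which use bigram hashing and a trained RNN reader):

```python
import math
from collections import Counter

def tokens(text):
    """Crude tokenizer: lowercase and strip sentence punctuation."""
    return [w.strip(".,?!").lower() for w in text.split()]

def tfidf(token_lists):
    """TF-IDF weight vectors (dicts) for a list of token lists."""
    n = len(token_lists)
    df = Counter(w for d in token_lists for w in set(d))
    return [{w: tf * math.log(n / df[w]) for w, tf in Counter(d).items()}
            for d in token_lists]

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(question, docs, k=1):
    """Stage 1: rank documents against the question by TF-IDF cosine."""
    vecs = tfidf([tokens(d) for d in docs] + [tokens(question)])
    q = vecs[-1]
    order = sorted(range(len(docs)), key=lambda i: cosine(q, vecs[i]),
                   reverse=True)
    return [docs[i] for i in order[:k]]

def read(question, doc):
    """Stage 2 stand-in reader: return the sentence overlapping the question most."""
    qwords = set(tokens(question))
    sents = [s.strip() for s in doc.split(".") if s.strip()]
    return max(sents, key=lambda s: len(qwords & set(tokens(s))))
```

For example, given two short articles and the question "What is the capital of France", `retrieve` surfaces the France article and `read` returns its answer-bearing sentence.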

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION

Bidirectional Attention Flow for Machine Comprehension

5 Nov 2016 allenai/bi-att-flow

Machine comprehension (MC), answering a query about a given context paragraph, requires modeling complex interactions between the context and the query. Recently, attention mechanisms have been successfully extended to MC.
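
One half of the context-query interaction the abstract describes, context-to-query attention, can be sketched as follows. The plain dot-product similarity and toy vectors are simplifications; BiDAF uses a trainable similarity function and also attends in the query-to-context direction:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def context_to_query_attention(context, query):
    """For each context vector, compute attention weights over all query
    vectors and return the attention-weighted query summary."""
    attended = []
    for c in context:
        sims = [sum(ci * qi for ci, qi in zip(c, q)) for q in query]
        alphas = softmax(sims)
        attended.append([sum(a * q[d] for a, q in zip(alphas, query))
                         for d in range(len(query[0]))])
    return attended
```

Each context position thus receives a query summary biased toward the query words most similar to it, which downstream layers can combine with the original context representation.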

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION

Deep Learning for Answer Sentence Selection

4 Dec 2014 brmson/dataset-sts

Answer sentence selection is the task of identifying sentences that contain the answer to a given question. This is an important problem in its own right as well as in the larger context of open-domain question answering.

OPEN-DOMAIN QUESTION ANSWERING

Gated-Attention Readers for Text Comprehension

ACL 2017 bdhingra/ga-reader

In this paper we study the problem of answering cloze-style questions over documents. Our model, the Gated-Attention (GA) Reader, integrates a multi-hop architecture with a novel attention mechanism, which is based on multiplicative interactions between the query embedding and the intermediate states of a recurrent neural network document reader.
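
The multiplicative interaction can be sketched as repeated element-wise gating of the reader's token states by a query embedding. For brevity this collapses the GA Reader's per-token attention over query states into a single fixed query vector, and omits the recurrent layers between hops:

```python
def gated_attention(token_states, query_embedding):
    """One gated-attention layer: modulate every token state by
    element-wise multiplication with the query embedding."""
    return [[t * q for t, q in zip(state, query_embedding)]
            for state in token_states]

def multi_hop(token_states, query_embedding, hops=3):
    """Multi-hop reading: apply the gate repeatedly so query-relevant
    dimensions are amplified and irrelevant ones are suppressed."""
    for _ in range(hops):
        token_states = gated_attention(token_states, query_embedding)
    return token_states
```

After a few hops, dimensions where the query embedding exceeds 1 grow while the rest shrink, which is the filtering effect the gating is meant to produce.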

ANSWER SELECTION OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION

SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine

18 Apr 2017 nyu-dl/SearchQA

We publicly release a new large-scale dataset, called SearchQA, for machine comprehension, or question-answering. Unlike recently released datasets, such as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to reflect a full pipeline of general question-answering.

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION

Text Understanding with the Attention Sum Reader Network

ACL 2016 rkadlec/asreader

Several large cloze-style context-question-answer datasets have been introduced recently: the CNN and Daily Mail news data and the Children's Book Test. Thanks to the size of these datasets, the associated text comprehension task is well suited for deep-learning techniques that currently seem to outperform all alternative approaches.
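
The model's central mechanism, summing pointer-style attention over every occurrence of the same candidate answer, can be sketched directly. The tokens and attention weights below are hypothetical stand-ins for the model's learned attention:

```python
from collections import defaultdict

def attention_sum(doc_tokens, attention_weights):
    """Aggregate per-position attention into per-candidate scores by
    summing the weight of every occurrence of the same token, then
    return the highest-scoring candidate."""
    scores = defaultdict(float)
    for tok, w in zip(doc_tokens, attention_weights):
        scores[tok] += w
    return max(scores, key=scores.get)
```

Note how a token that recurs can beat the single most-attended position: below, "paris" holds the largest individual weight, but the two occurrences of "obama" sum higher.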

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION

Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text

EMNLP 2018 OceanskySun/GraftNet

In this paper we look at a more practical setting, namely QA over the combination of a KB and entity-linked text, which is appropriate when an incomplete KB is available with a large text corpus. We show that GRAFT-Net is competitive with the state-of-the-art when tested using either KBs or text alone, and vastly outperforms existing methods in the combined setting.

GRAPH REPRESENTATION LEARNING OPEN-DOMAIN QUESTION ANSWERING

A Question-Focused Multi-Factor Attention Network for Question Answering

25 Jan 2018 nusnlp/amanda

Neural network models recently proposed for question answering (QA) primarily focus on capturing the passage-question relation. However, they have minimal capability to link relevant facts distributed across multiple sentences, which is crucial for deeper understanding, such as multi-sentence reasoning and co-reference resolution.

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION

Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering

ICLR 2018 shuohangwang/mprc

A popular recent approach to answering open-domain questions is to first search for question-related passages and then apply reading comprehension models to extract answers. We propose two methods, namely, strength-based re-ranking and coverage-based re-ranking, to make use of the aggregated evidence from different passages to better determine the answer.
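
Strength-based re-ranking in its simplest form just counts how many retrieved passages yield the same answer string; this sketch omits summing reader confidences and the coverage-based variant described in the paper:

```python
from collections import Counter

def strength_rerank(candidate_answers):
    """Re-rank by evidence strength: an answer extracted from many
    passages outranks one extracted from a single passage, regardless
    of any one passage's reader confidence."""
    counts = Counter(candidate_answers)
    return counts.most_common(1)[0][0]
```

Given the per-passage reader outputs, the answer supported by the most passages wins even if a competing answer was the top prediction for some individual passage.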

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION

R^3: Reinforced Reader-Ranker for Open-Domain Question Answering

31 Aug 2017 shuohangwang/mprc

First, we propose a new pipeline for open-domain QA with a Ranker component, which learns to rank retrieved passages in terms of likelihood of generating the ground-truth answer to a given question. Second, we propose a novel method that jointly trains the Ranker along with an answer-generation Reader model, based on reinforcement learning.

OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION