Natural Questions

71 papers with code • 2 benchmarks • 4 datasets

Natural Questions is an open-domain question answering benchmark built from real, anonymized queries issued to Google Search. Each question is paired with a Wikipedia page annotated with a long answer (typically a paragraph) and, where one exists, a short answer (one or more entities inside the long answer); systems are evaluated on both granularities.

Most implemented papers

Unsupervised Question Answering by Cloze Translation

facebookresearch/UnsupervisedQA ACL 2019

We approach this problem by first learning to generate context, question and answer triples in an unsupervised manner, which we then use to synthesize Extractive QA training data automatically.
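
As a rough illustration of the data-generation idea (answer span → cloze question → synthetic extractive QA example), here is a hedged sketch that assumes nothing about the released code; the paper learns to translate cloze questions into natural questions with unsupervised NMT, which the fixed template below merely stands in for.

```python
# Illustrative sketch of cloze-style QA data synthesis (not the paper's code).
import re

def cloze_examples(context: str):
    examples = []
    for sentence in re.split(r"(?<=[.!?])\s+", context):
        # Naive answer candidates: capitalized spans (a stand-in for an NER step).
        for m in re.finditer(r"[A-Z][a-z]+(?: [A-Z][a-z]+)*", sentence):
            answer = m.group(0)
            cloze_question = sentence.replace(answer, "[MASK]", 1)
            # The paper rewrites cloze questions into natural questions with an
            # unsupervised NMT model; a fixed template stands in for that here.
            natural_question = "Fill in the blank: " + cloze_question
            examples.append({
                "question": natural_question,
                "context": context,
                "answers": {"text": [answer], "answer_start": [context.find(answer)]},
            })
    return examples

for ex in cloze_examples(
    "Natural Questions was released by Google Research. "
    "The questions come from real Google Search queries."
):
    print(ex["question"], "->", ex["answers"]["text"][0])
```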

Span Selection Pre-training for Question Answering

IBM/span-selection-pretraining ACL 2020

BERT (Bidirectional Encoder Representations from Transformers) and related pre-trained Transformers have provided large gains across many language understanding tasks, achieving a new state-of-the-art (SOTA).

Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension

DancingSoul/NQ_BERT-DM ACL 2020

Natural Questions is a challenging new machine reading comprehension benchmark with two-grained answers: a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer).
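
To make the two-grained answer format concrete, below is a simplified sketch of one NQ-style example; the field names are illustrative and do not follow the official NQ schema exactly (which uses token offsets into the full Wikipedia document).

```python
# Simplified, illustrative NQ-style example (not the official schema).
example = {
    "question": "who released the natural questions dataset",
    "document": "... full Wikipedia page text ...",
    "long_answer": {            # typically a whole paragraph from the page
        "start_token": 1042,
        "end_token": 1127,
    },
    "short_answers": [          # zero or more entity spans inside the long answer
        {"start_token": 1050, "end_token": 1052, "text": "Google Research"},
    ],
}

# Systems are scored separately on long-answer and short-answer selection,
# and may also predict that no answer of a given granularity exists.
```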

C3VQG: Category Consistent Cyclic Visual Question Generation

sarthak268/c3vqg-official 15 May 2020

In this paper, we try to exploit the different visual cues and concepts in an image to generate questions using a variational autoencoder (VAE) without ground-truth answers.
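
The snippet names a VAE as the generator. Purely to illustrate the latent-variable machinery involved (and not the C3VQG architecture, which conditions on image features and answer categories), here is a minimal VAE sketch in PyTorch.

```python
# Minimal VAE machinery, for illustration only; NOT the C3VQG model.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, input_dim=512, latent_dim=64):
        super().__init__()
        self.to_mu = nn.Linear(input_dim, latent_dim)
        self.to_logvar = nn.Linear(input_dim, latent_dim)
        self.decode = nn.Linear(latent_dim, input_dim)  # stand-in for a question decoder

    def forward(self, x):
        mu, logvar = self.to_mu(x), self.to_logvar(x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)            # reparameterization trick
        recon = self.decode(z)
        # KL divergence between q(z|x) and a standard normal prior
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
        return recon, kl

x = torch.randn(8, 512)                                 # e.g. pooled image features
recon, kl = TinyVAE()(x)
loss = nn.functional.mse_loss(recon, x) + 0.1 * kl      # reconstruction + weighted KL
```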

QED: A Framework and Dataset for Explanations in Question Answering

google-research-datasets/QED 8 Sep 2020

A question answering system that in addition to providing an answer provides an explanation of the reasoning that leads to that answer has potential advantages in terms of debuggability, extensibility and trust.

Generation-Augmented Retrieval for Open-domain Question Answering

morningmoni/GAR ACL 2021

We demonstrate that the generated contexts substantially enrich the semantics of the queries and GAR with sparse representations (BM25) achieves comparable or better performance than state-of-the-art dense retrieval methods such as DPR.
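
A hedged sketch of the generation-augmented sparse retrieval idea: expand the query with generated text, then rank passages with BM25. It assumes the rank_bm25 package, and generate_context() below is a hypothetical placeholder for the paper's seq2seq generator (which produces answers, sentences, and titles to append to the query).

```python
# Sketch of generation-augmented BM25 retrieval (not the GAR release).
from rank_bm25 import BM25Okapi

corpus = [
    "Natural Questions contains real queries issued to the Google search engine.",
    "Dense passage retrieval encodes questions and passages into a shared vector space.",
    "BM25 is a classic sparse, term-matching retrieval function.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

def generate_context(query: str) -> str:
    # Hypothetical stand-in for the learned query-expansion generator.
    return "google search engine queries"

query = "where do natural questions queries come from"
augmented = query + " " + generate_context(query)
print(bm25.get_top_n(augmented.lower().split(), corpus, n=2))
```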

RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering

paddlepaddle/rocketqa NAACL 2021

In open-domain question answering, dense passage retrieval has become a new paradigm to retrieve relevant passages for finding answers.
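
For readers new to the paradigm, here is a minimal dense-retrieval sketch: encode the question and the passages, then rank passages by similarity. It assumes the sentence-transformers package with a generic pretrained encoder; RocketQA itself trains a dual encoder with cross-batch negatives, denoised hard negatives, and data augmentation.

```python
# Minimal dense-retrieval sketch (not RocketQA).
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # generic encoder, for illustration
passages = [
    "Natural Questions pairs Google queries with Wikipedia pages.",
    "BM25 ranks documents by term overlap with the query.",
    "Dense retrievers embed questions and passages into the same vector space.",
]
p_emb = encoder.encode(passages, normalize_embeddings=True)
q_emb = encoder.encode(["how does dense passage retrieval work"], normalize_embeddings=True)

scores = np.dot(p_emb, q_emb[0])        # cosine similarity (embeddings are normalized)
for idx in np.argsort(-scores):
    print(f"{scores[idx]:.3f}  {passages[idx]}")
```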

RECONSIDER: Re-Ranking using Span-Focused Cross-Attention for Open Domain Question Answering

facebookresearch/reconsider 21 Oct 2020

State-of-the-art Machine Reading Comprehension (MRC) models for Open-domain Question Answering (QA) are typically trained for span selection using distantly supervised positive examples and heuristically retrieved negative examples.
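
As a hedged sketch of the distant-supervision recipe the snippet describes (a common setup, not the RECONSIDER code): retrieved passages containing the gold answer string become positives, the rest become heuristic negatives.

```python
# Illustrative distant supervision for span-selection training data.
def build_training_pairs(question, answer, retrieved_passages):
    positives, negatives = [], []
    for passage in retrieved_passages:
        start = passage.lower().find(answer.lower())
        if start >= 0:  # answer string occurs -> distantly supervised positive
            positives.append({
                "question": question,
                "passage": passage,
                "answer_start": start,
                "answer_text": passage[start:start + len(answer)],
            })
        else:           # heuristically retrieved negative
            negatives.append({"question": question, "passage": passage})
    return positives, negatives

pos, neg = build_training_pairs(
    "who released the natural questions dataset",
    "Google",
    ["Natural Questions was released by Google AI in 2019.",
     "SQuAD is a reading comprehension dataset built from Wikipedia."],
)
print(len(pos), "positives,", len(neg), "negatives")
```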

Rider: Reader-Guided Passage Reranking for Open-Domain Question Answering

morningmoni/GAR 1 Jan 2021

Current open-domain question answering systems often follow a Retriever-Reader architecture, where the retriever first retrieves relevant passages and the reader then reads the retrieved passages to form an answer.
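
The Retriever-Reader architecture mentioned here can be sketched in a few lines: a retriever selects candidate passages and an extractive reader scores answer spans in each. The sketch below uses a toy word-overlap retriever and the Hugging Face question-answering pipeline as the reader; Rider's contribution (reranking the retrieved passages using the reader's top predicted answers) is not reproduced.

```python
# Minimal retriever-reader sketch, for illustration only.
from transformers import pipeline

reader = pipeline("question-answering")  # downloads a default extractive QA model

passages = [
    "Natural Questions was released by Google in 2019.",
    "The dataset pairs real search queries with Wikipedia pages.",
]

def toy_retriever(question, corpus, k=2):
    # Stand-in retriever: rank passages by word overlap with the question.
    overlap = lambda p: len(set(question.lower().split()) & set(p.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

question = "who released natural questions"
candidates = [reader(question=question, context=p) for p in toy_retriever(question, passages)]
best = max(candidates, key=lambda c: c["score"])
print(best["answer"], best["score"])
```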

Open Domain Question Answering over Tables via Dense Retrieval

google-research/tapas NAACL 2021

Recent advances in open-domain QA have led to strong models based on dense retrieval, but these have focused only on retrieving textual passages.
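
The paper retrieves tables with a TAPAS-based dense retriever. A much simpler, commonly used trick for making tables searchable by any text retriever is to linearize them into strings; the hedged sketch below shows that alternative purely for illustration and is not the paper's method.

```python
# Hedged sketch: flatten a table into text so a standard text retriever
# (sparse or dense) can index it. Not the TAPAS-based approach in the paper.
def linearize_table(title, header, rows):
    parts = [title]
    for row in rows:
        cells = [f"{col} is {val}" for col, val in zip(header, row)]
        parts.append("; ".join(cells))
    return ". ".join(parts)

text = linearize_table(
    title="Largest cities (illustrative table)",
    header=["City", "Country", "Population"],
    rows=[["Tokyo", "Japan", "37 million"], ["Delhi", "India", "32 million"]],
)
print(text)  # this string can now be fed to BM25 or a dense text encoder
```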