Natural Questions
71 papers with code • 2 benchmarks • 4 datasets
Libraries
Use these libraries to find Natural Questions models and implementations.

Most implemented papers
Unsupervised Question Answering by Cloze Translation
We approach this problem by first learning to generate context, question and answer triples in an unsupervised manner, which we then use to synthesize Extractive QA training data automatically.
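The cloze idea can be illustrated with a minimal sketch: blank out a candidate answer span in a sentence to form a (context, question, answer) triple. This toy version uses capitalized spans as answer candidates; the paper itself uses NER/noun-phrase extraction and then learns to translate cloze questions into natural-sounding ones.

```python
import re

def make_cloze_triples(context):
    """Toy cloze generation: treat each capitalized token span as a
    candidate answer and blank it out to form a fill-in question.
    (Illustrative only, not the paper's actual pipeline.)"""
    triples = []
    for match in re.finditer(r"\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b", context):
        answer = match.group()
        question = context[:match.start()] + "____" + context[match.end():]
        triples.append({"context": context, "question": question, "answer": answer})
    return triples

sent = "Marie Curie won the Nobel Prize in Physics in 1903."
for t in make_cloze_triples(sent):
    print(t["answer"], "->", t["question"])
```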
Span Selection Pre-training for Question Answering
BERT (Bidirectional Encoder Representations from Transformers) and related pre-trained Transformers have provided large gains across many language understanding tasks, achieving a new state-of-the-art (SOTA).
Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension
Natural Questions is a new challenging machine reading comprehension benchmark with two-grained answers, which are a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer).
C3VQG: Category Consistent Cyclic Visual Question Generation
In this paper, we try to exploit the different visual cues and concepts in an image to generate questions using a variational autoencoder (VAE) without ground-truth answers.
QED: A Framework and Dataset for Explanations in Question Answering
A question answering system that in addition to providing an answer provides an explanation of the reasoning that leads to that answer has potential advantages in terms of debuggability, extensibility and trust.
Generation-Augmented Retrieval for Open-domain Question Answering
We demonstrate that the generated contexts substantially enrich the semantics of the queries and GAR with sparse representations (BM25) achieves comparable or better performance than state-of-the-art dense retrieval methods such as DPR.
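For reference, the sparse BM25 scoring that GAR pairs with its generated contexts can be sketched from scratch on a toy corpus (GAR additionally appends generated contexts to the query before retrieval, which is not shown here):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against a tokenized query with classic Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # document frequency of each term
    df = Counter()
    for d in docs:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            s += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

docs = [
    "the capital of france is paris".split(),
    "berlin is the capital of germany".split(),
    "paris hosted the olympics".split(),
]
print(bm25_scores("capital of france".split(), docs))
```

The document containing all query terms receives the highest score; production systems use an inverted index (e.g. Lucene) rather than scoring every document.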
RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering
In open-domain question answering, dense passage retrieval has become a new paradigm to retrieve relevant passages for finding answers.
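Mechanically, dense retrieval ranks passages by the inner product between a query vector and passage vectors. A minimal sketch, with a bag-of-words count vector standing in for the learned BERT-based encoders that systems like DPR actually use:

```python
from collections import Counter

def embed(tokens, vocab):
    """Stand-in 'encoder': a bag-of-words count vector.
    Real dense retrievers use learned neural encoders instead."""
    counts = Counter(tokens)
    return [counts[w] for w in vocab]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dense_retrieve(query, passages, top_k=1):
    """Rank passages by inner product with the query vector."""
    vocab = sorted({w for p in passages for w in p} | set(query))
    q = embed(query, vocab)
    scored = [(dot(q, embed(p, vocab)), i) for i, p in enumerate(passages)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:top_k]]

passages = [
    "dense retrieval maps passages to vectors".split(),
    "sparse methods rely on exact term matching".split(),
]
print(dense_retrieve("dense vectors for retrieval".split(), passages))
```

At scale, the passage vectors are precomputed and searched with an approximate nearest-neighbor index rather than an exhaustive loop.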
RECONSIDER: Re-Ranking using Span-Focused Cross-Attention for Open Domain Question Answering
State-of-the-art Machine Reading Comprehension (MRC) models for Open-domain Question Answering (QA) are typically trained for span selection using distantly supervised positive examples and heuristically retrieved negative examples.
Rider: Reader-Guided Passage Reranking for Open-Domain Question Answering
Current open-domain question answering systems often follow a Retriever-Reader architecture, where the retriever first retrieves relevant passages and the reader then reads the retrieved passages to form an answer.
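The retriever-reader pipeline can be sketched end to end with toy components: a word-overlap retriever and a reader that extracts the best-matching sentence. Both stages are placeholders for the learned models these papers use.

```python
import re

def toks(text):
    """Lowercased word tokens as a set."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, passages, k=2):
    """Toy retriever: rank passages by word overlap with the question."""
    q = toks(question)
    return sorted(passages, key=lambda p: len(q & toks(p)), reverse=True)[:k]

def read(question, passages):
    """Toy reader: return the sentence with the most question-word overlap."""
    q = toks(question)
    sentences = [s for p in passages for s in re.split(r"(?<=\.)\s+", p)]
    return max(sentences, key=lambda s: len(q & toks(s)))

passages = [
    "Paris is the capital of France. It lies on the Seine.",
    "Berlin is the capital of Germany.",
    "The Eiffel Tower opened in 1889.",
]
question = "What is the capital of France?"
print(read(question, retrieve(question, passages)))
```

Rider's contribution fits between these two stages: it reranks the retrieved passages using the reader's own predictions before the final answer is produced.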
Open Domain Question Answering over Tables via Dense Retrieval
Recent advances in open-domain QA have led to strong models based on dense retrieval, but these have focused only on retrieving textual passages.