Natural Questions
80 papers with code • 2 benchmarks • 4 datasets
Most implemented papers
Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering
Generative models for open domain question answering have proven to be competitive, without resorting to external knowledge.
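The underlying Fusion-in-Decoder (FiD) idea is to encode each retrieved passage independently and let the decoder attend over all encoder states at once. A minimal sketch with an off-the-shelf T5 checkpoint (the question, passages, and the t5-small checkpoint are placeholders; a FiD-trained model is needed for real answers):

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput

tok = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

question = "where is the eiffel tower"            # placeholder example
passages = ["The Eiffel Tower is in Paris.",       # placeholder retrieved passages
            "Construction finished in 1889."]

# Encode each (question, passage) pair independently.
texts = [f"question: {question} context: {p}" for p in passages]
enc = tok(texts, return_tensors="pt", padding=True)
with torch.no_grad():
    enc_out = model.encoder(input_ids=enc.input_ids,
                            attention_mask=enc.attention_mask)

# Fuse: flatten the per-passage states into one long sequence so the
# decoder's cross-attention can see evidence from every passage at once.
fused = enc_out.last_hidden_state.reshape(1, -1, model.config.d_model)
mask = enc.attention_mask.reshape(1, -1)

answer_ids = model.generate(
    encoder_outputs=BaseModelOutput(last_hidden_state=fused),
    attention_mask=mask, max_new_tokens=16)
print(tok.decode(answer_ids[0], skip_special_tokens=True))
```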
Knowledge Guided Text Retrieval and Reading for Open Domain Question Answering
We introduce an approach for open-domain question answering (QA) that retrieves and reads a passage graph, where vertices are passages of text and edges represent relationships that are derived from an external knowledge base or co-occurrence in the same article.
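A minimal sketch of the graph construction described above, assuming a hypothetical kb_related predicate in place of the paper's actual knowledge-base lookup:

```python
from collections import defaultdict
from itertools import combinations

def build_passage_graph(passages, kb_related):
    """passages: list of (passage_id, article_id, entity_set) tuples.
    kb_related(e1, e2): hypothetical predicate, True if a KB relation links e1 and e2."""
    graph = defaultdict(set)
    for (p1, a1, ents1), (p2, a2, ents2) in combinations(passages, 2):
        # Edge type 1: co-occurrence in the same article.
        if a1 == a2:
            graph[p1].add(p2); graph[p2].add(p1)
        # Edge type 2: a knowledge-base relation between mentioned entities.
        elif any(kb_related(x, y) for x in ents1 for y in ents2):
            graph[p1].add(p2); graph[p2].add(p1)
    return graph
```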
Relevance-guided Supervision for OpenQA with ColBERT
In much recent work, the retriever is a learned component that uses coarse-grained vector representations of questions and passages.
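ColBERT instead keeps token-level embeddings and scores passages with its late-interaction MaxSim operator; a sketch of that scoring step, assuming L2-normalized embedding matrices:

```python
import torch

def maxsim_score(q_emb: torch.Tensor, d_emb: torch.Tensor) -> torch.Tensor:
    """q_emb: (num_query_tokens, dim), d_emb: (num_doc_tokens, dim),
    rows L2-normalized so dot products are cosine similarities."""
    sim = q_emb @ d_emb.T                  # all query-token / doc-token similarities
    return sim.max(dim=1).values.sum()     # best doc token per query token, summed
```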
RealFormer: Transformer Likes Residual Attention
The Transformer is the backbone of modern NLP models.
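RealFormer's change is a residual connection over the pre-softmax attention scores, forwarded from layer to layer. A single-head sketch (shapes and naming are illustrative, not the paper's code):

```python
import math
import torch

def residual_attention(q, k, v, prev_scores=None):
    """q, k, v: (seq_len, dim). prev_scores: raw scores from the previous layer."""
    scores = q @ k.T / math.sqrt(q.size(-1))
    if prev_scores is not None:
        scores = scores + prev_scores       # the residual edge RealFormer adds
    out = scores.softmax(dim=-1) @ v
    return out, scores                      # raw scores flow on to the next layer
```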
A BERT Baseline for the Natural Questions
This technical note describes a new baseline for the Natural Questions benchmark.
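The baseline scores answer candidates with a span-prediction head on top of BERT. A sketch with the Hugging Face QA head (bert-base-uncased here carries an untrained head; the actual baseline is fine-tuned on Natural Questions):

```python
import torch
from transformers import AutoTokenizer, BertForQuestionAnswering

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")  # head untrained

enc = tok("who wrote hamlet",
          "Hamlet is a tragedy written by William Shakespeare.",
          return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

# Take the highest-scoring start/end token positions as the answer span.
start = out.start_logits.argmax()
end = out.end_logits.argmax()
print(tok.decode(enc.input_ids[0][start:end + 1]))
```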
Event Extraction by Answering (Almost) Natural Questions
The problem of event extraction requires detecting the event trigger and extracting its corresponding arguments.
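Casting both steps as extractive QA can be sketched with a generic QA pipeline; the role-question templates below are hypothetical stand-ins for the paper's templates:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

sentence = "The rebels attacked the convoy near the border on Tuesday."
# Hypothetical templates; the paper derives its questions from annotation guidelines.
questions = {
    "trigger": "What is the event?",
    "attacker": "Who attacked?",
    "target": "Who was attacked?",
    "place": "Where did the attack happen?",
}
for role, q in questions.items():
    print(role, "->", qa(question=q, context=sentence)["answer"])
```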
AutoQA: From Databases To QA Semantic Parsers With Only Synthetic Training Data
To demonstrate the generality of AutoQA, we also apply it to the Overnight dataset.
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
TempoQR: Temporal Question Reasoning over Knowledge Graphs
TempoQR has three modules: the first computes a textual representation of a given question, the second combines it with the entity embeddings for entities involved in the question, and the third generates question-specific time embeddings.
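A schematic of that three-module pipeline (every layer here is a simplified stand-in, not the paper's architecture):

```python
import torch
import torch.nn as nn

class TempoQRStyleEncoder(nn.Module):
    """Schematic only: three modules composed as in the abstract above."""
    def __init__(self, dim=128, vocab=1000, n_entities=500, n_timestamps=100):
        super().__init__()
        self.text = nn.EmbeddingBag(vocab, dim)       # stand-in for a language model
        self.entity_emb = nn.Embedding(n_entities, dim)
        self.time_emb = nn.Embedding(n_timestamps, dim)
        self.fuse = nn.Linear(2 * dim, dim)
        self.to_time = nn.Linear(dim, n_timestamps)

    def forward(self, token_ids, entity_ids):
        q_text = self.text(token_ids)                             # module 1: question text
        ents = self.entity_emb(entity_ids).mean(dim=1)
        q_ent = self.fuse(torch.cat([q_text, ents], dim=-1))      # module 2: entity fusion
        time_weights = self.to_time(q_ent).softmax(dim=-1)        # module 3:
        return time_weights @ self.time_emb.weight                # question-specific time emb.
```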
ST-MoE: Designing Stable and Transferable Sparse Expert Models
Sparse expert models offer an efficient route to scale, but advancing the state-of-the-art across a broad set of natural language tasks has been hindered by training instabilities and uncertain quality during fine-tuning.
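One of the paper's stability fixes is the router z-loss, which penalizes large pre-softmax router logits; a sketch of that auxiliary loss:

```python
import torch

def router_z_loss(router_logits: torch.Tensor) -> torch.Tensor:
    """router_logits: (num_tokens, num_experts). Penalizing the squared
    log-sum-exp keeps router logits small, which stabilizes MoE training."""
    z = torch.logsumexp(router_logits, dim=-1)
    return (z ** 2).mean()
```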