Natural Questions

80 papers with code • 2 benchmarks • 4 datasets

Natural Questions is an open-domain question answering benchmark built from real, anonymized queries issued to Google Search. Each question is paired with a Wikipedia page annotated with a long answer (typically a paragraph) and, when one exists, a short answer span.


Most implemented papers

Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering

jhyuklee/DensePhrases EACL 2021

Generative models for open domain question answering have proven to be competitive, without resorting to external knowledge.

Knowledge Guided Text Retrieval and Reading for Open Domain Question Answering

huggingface/transformers 10 Nov 2019

We introduce an approach for open-domain question answering (QA) that retrieves and reads a passage graph, where vertices are passages of text and edges represent relationships that are derived from an external knowledge base or co-occurrence in the same article.
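One edge type the excerpt mentions — passages linked by co-occurrence in the same article — can be sketched as a simple adjacency map. Function and variable names below are illustrative assumptions, not from the paper's code.

```python
# Minimal sketch of a passage graph for retrieve-and-read QA:
# vertices are passages, and we add edges between passages that
# co-occur in the same source article (illustrative names).
from collections import defaultdict

def build_passage_graph(passages):
    """passages: list of (passage_id, article_id) pairs.
    Returns an adjacency map linking passages from the same article."""
    by_article = defaultdict(list)
    for pid, article in passages:
        by_article[article].append(pid)
    graph = defaultdict(set)
    for pids in by_article.values():
        for a in pids:
            for b in pids:
                if a != b:
                    graph[a].add(b)  # undirected co-occurrence edge
    return graph

g = build_passage_graph([("p1", "art1"), ("p2", "art1"), ("p3", "art2")])
```

In the full approach, edges derived from an external knowledge base would be added alongside these co-occurrence edges.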

Relevance-guided Supervision for OpenQA with ColBERT

stanford-futuredata/ColBERT 1 Jul 2020

In much recent work, the retriever is a learned component that uses coarse-grained vector representations of questions and passages.
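The "coarse-grained" retrieval the excerpt contrasts with can be sketched as follows: each question and passage is reduced to a single dense vector, and passages are ranked by dot-product similarity. The vectors here are toy hand-written values, not learned representations.

```python
# Sketch of single-vector dense retrieval: rank passages by
# dot-product similarity to the question vector (toy embeddings).
import numpy as np

def rank_passages(q_vec, passage_vecs):
    """Return passage indices sorted by descending similarity score."""
    scores = passage_vecs @ q_vec          # one score per passage
    return list(np.argsort(-scores))       # highest score first

q = np.array([1.0, 0.0])
P = np.array([[0.9, 0.1],   # points in roughly the same direction as q
              [0.1, 0.9]])  # nearly orthogonal to q
order = rank_passages(q, P)
```

ColBERT's contribution is finer-grained late interaction over per-token vectors rather than this single-vector scoring.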

A BERT Baseline for the Natural Questions

google-research/language 24 Jan 2019

This technical note describes a new baseline for the Natural Questions.

Event Extraction by Answering (Almost) Natural Questions

xinyadu/eeqa EMNLP 2020

The problem of event extraction requires detecting the event trigger and extracting its corresponding arguments.
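Framing this as question answering, per the paper's title, means asking one question for the trigger and one per argument role. The templates below are illustrative guesses, not the paper's actual question formats.

```python
# Sketch of casting event extraction as QA: generate one question
# for the event trigger and one per argument role (templates are
# hypothetical, not taken from the EEQA paper).
def make_questions(sentence, roles):
    qs = [("trigger", f"What is the trigger in: {sentence}")]
    qs += [(role, f"Who or what is the {role}?") for role in roles]
    return qs

qs = make_questions("The company fired the manager.", ["agent", "patient"])
```

A reading-comprehension model would then answer each question with a span from the sentence.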

AutoQA: From Databases To QA Semantic Parsers With Only Synthetic Training Data

stanford-oval/genie-toolkit EMNLP 2020

To demonstrate the generality of AutoQA, we also apply it to the Overnight dataset.

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

allenai/dolma 8 Dec 2021

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.

TempoQR: Temporal Question Reasoning over Knowledge Graphs

cmavro/tempoqr 10 Dec 2021

TempoQR comprises three modules: the first computes a textual representation of a given question, the second combines it with the entity embeddings for entities involved in the question, and the third generates question-specific time embeddings.
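The fusion of question, entity, and time representations can be sketched as below. The averaging-and-concatenation scheme and all names are assumptions for illustration, not TempoQR's actual architecture.

```python
# Illustrative sketch of combining a question representation with
# entity embeddings and a question-specific time embedding
# (the fusion scheme here is an assumption, not TempoQR's).
import numpy as np

def fuse(question_vec, entity_vecs, time_vec):
    # combine the question with the entities it mentions
    entity_part = np.mean(entity_vecs, axis=0)
    combined = question_vec + entity_part
    # append the question-specific time embedding
    return np.concatenate([combined, time_vec])

q = np.zeros(4)           # toy question representation
ents = np.ones((2, 4))    # toy embeddings for two entities
t = np.full(2, 0.5)       # toy time embedding
vec = fuse(q, ents, t)
```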

ST-MoE: Designing Stable and Transferable Sparse Expert Models

tensorflow/mesh 17 Feb 2022

Advancing the state of the art across a broad set of natural language tasks, however, has been hindered by training instabilities and uncertain quality during fine-tuning.