Natural Questions
71 papers with code • 2 benchmarks • 4 datasets
Libraries
Use these libraries to find Natural Questions models and implementations
Most implemented papers
End-to-End Training of Neural Retrievers for Open-Domain Question Answering
We also explore two approaches for end-to-end supervised training of the reader and retriever components in OpenQA models.
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
Continual Learning with Knowledge Transfer for Sentiment Classification
In this setting, the CL system learns a sequence of SC tasks incrementally in a neural network, where each task builds a classifier to classify the sentiment of reviews of a particular product category or domain.
ST-MoE: Designing Stable and Transferable Sparse Expert Models
But advancing the state-of-the-art across a broad set of natural language tasks has been hindered by training instabilities and uncertain quality during fine-tuning.
Would You Ask it that Way? Measuring and Improving Question Naturalness for Knowledge Graph Question Answering
The construction of this test collection also sheds light on the challenges of constructing large-scale KGQA datasets with genuinely NL questions.
Dr.Spider: A Diagnostic Evaluation Benchmark towards Text-to-SQL Robustness
Neural text-to-SQL models have achieved remarkable performance in translating natural language questions into SQL queries.
Sampling From Large Graphs
Thus graph sampling is essential. The natural questions to ask are (a) which sampling method to use, (b) how small the sample size can be, and (c) how to scale up the measurements of the sample (e.g., the diameter) to get estimates for the large graph.
CoVeR: Learning Covariate-Specific Vector Representations with Tensor Decompositions
However, in addition to the text data itself, we often have additional covariates associated with individual corpus documents (e.g., the demographic of the author, or the time and venue of publication) and we would like the embedding to naturally capture this information.
Multimodal Differential Network for Visual Question Generation
Generating natural questions from an image is a semantic task that requires using visual and language modality to learn multimodal representations.
Natural Questions: a Benchmark for Question Answering Research
The public release consists of 307,373 training examples with single annotations, 7,830 examples with 5-way annotations for development data, and a further 7,842 examples 5-way annotated sequestered as test data.
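The split sizes above can be summarized in a short sketch; the split names (`train`, `dev`, `test`) are illustrative labels for the three releases described, not official identifiers:

```python
# Public-release split sizes for the Natural Questions benchmark,
# taken from the paper description above.
splits = {
    "train": 307_373,  # single-annotated training examples
    "dev": 7_830,      # 5-way annotated development examples
    "test": 7_842,     # 5-way annotated, sequestered test examples
}

# Total number of examples across all three splits.
total = sum(splits.values())
print(total)  # 323045
```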