Known Unknowns

7 papers with code • 0 benchmarks • 0 datasets

Language models have a tendency to generate text containing false statements, a failure often referred to as "hallucination." The primary purpose of this task is to test for this failure case by probing whether a model can correctly identify that the answer to a question is unknown. A common failure mode is to predict a specific (and therefore likely false) answer when the truth is unknown, rather than predicting that the answer is unknown.

Source: BIG-bench
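
To make the setup concrete, here is a minimal sketch of such a probe, assuming a generic generate(prompt) call standing in for whatever model is under test; the helper name and example questions are illustrative and are not taken from the BIG-bench task data.

```python
# Minimal sketch of a known-unknowns probe. `generate` is a placeholder for
# any language model's text-completion call; the questions below are
# illustrative, not drawn from the BIG-bench task itself.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your model's completion call here")

# Each item pairs a question with the expected label:
# a concrete answer for answerable questions, "Unknown" otherwise.
probes = [
    ("What is the capital of France?", "Paris"),
    ("What number am I thinking of right now?", "Unknown"),
]

def score(probes):
    correct = 0
    for question, expected in probes:
        prompt = (
            f"Q: {question}\n"
            "A (answer with the fact, or 'Unknown' if it cannot be known):"
        )
        answer = generate(prompt).strip()
        correct += answer.lower().startswith(expected.lower())
    return correct / len(probes)
```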

Most implemented papers

PaLM: Scaling Language Modeling with Pathways

lucidrains/CoCa-pytorch Google Research 2022

To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion-parameter, densely activated Transformer language model, which we call the Pathways Language Model (PaLM).

Generative ODE Modeling with Known Unknowns

orilinial/GOKU ICLR Workshop DeepDiffEq 2019

A motivating example is intensive care unit patients: the dynamics of vital physiological functions, such as the cardiovascular system with its associated variables (heart rate, cardiac contractility and output, and vascular resistance), can be approximately described by a known system of ODEs.
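
As a toy illustration of this "known structure, unknown parameters" setting, the sketch below recovers the parameters of a known ODE (a damped oscillator standing in for the cardiovascular system) from noisy observations; it is a plain least-squares fit with assumed names and values, not the paper's generative model.

```python
# Toy illustration of "known ODE structure, unknown parameters": recover the
# parameters of a damped oscillator from noisy observations. This is a plain
# least-squares fit, not the paper's generative (GOKU) approach.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def dynamics(t, state, omega, zeta):
    x, v = state                        # known structural form of the ODE
    return [v, -2 * zeta * omega * v - omega**2 * x]

def simulate(params, t_eval):
    omega, zeta = params
    sol = solve_ivp(dynamics, (t_eval[0], t_eval[-1]), [1.0, 0.0],
                    t_eval=t_eval, args=(omega, zeta))
    return sol.y[0]

t = np.linspace(0, 10, 200)
true_params = (2.0, 0.1)                # hidden from the fitting procedure
observations = simulate(true_params, t) + 0.05 * np.random.randn(t.size)

fit = least_squares(lambda p: simulate(p, t) - observations, x0=[1.0, 0.5])
print("recovered omega, zeta:", fit.x)
```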

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

allenai/dolma NA 2021

Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.

Training Compute-Optimal Large Language Models

karpathy/llama2.c 29 Mar 2022

We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget.
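
For a back-of-the-envelope feel for this trade-off, the sketch below uses two widely cited approximations associated with this work: training compute C ≈ 6·N·D FLOPs and roughly 20 training tokens per parameter at the optimum; the paper's fitted coefficients differ slightly across its estimation approaches.

```python
# Back-of-the-envelope compute-optimal sizing, using two common approximations:
# training FLOPs C ~= 6 * N * D, and D ~= 20 * N at the compute optimum.
# These are rules of thumb, not the paper's exact fitted coefficients.

def compute_optimal(flop_budget: float, tokens_per_param: float = 20.0):
    # C = 6 * N * D and D = k * N  =>  N = sqrt(C / (6 * k))
    n_params = (flop_budget / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: a budget of ~5.76e23 FLOPs, roughly the Chinchilla training run.
n, d = compute_optimal(5.76e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")   # ~7e10 params, ~1.4e12 tokens
```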

Known Unknowns: Uncertainty Quality in Bayesian Neural Networks

ramon-oliveira/deepstats 5 Dec 2016

We compare the following candidate neural network models: Maximum Likelihood, Bayesian Dropout, OSBA, and, for MNIST, the standard variational approximation.
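
A minimal sketch of the Bayesian (Monte Carlo) dropout baseline, assuming PyTorch; the architecture, sizes, and dropout rate are illustrative rather than the paper's configuration.

```python
# Minimal Monte Carlo dropout sketch: keep dropout active at test time and
# average over stochastic forward passes; the predictive entropy serves as a
# rough uncertainty estimate. Model and sizes are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(256, 10),
)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()                       # keep dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)            # averaged predictive distribution
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy                # high entropy ~ "known unknown"

x = torch.randn(4, 784)                 # stand-in for flattened MNIST inputs
mean, entropy = mc_dropout_predict(model, x)
print(entropy)
```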

The division of labor in communication: Speakers help listeners account for asymmetries in visual perspective

hawkrobe/division_of_labor 24 Jul 2018

In Experiment 1, we manipulated the presence or absence of occlusions in a director-matcher task and found that speakers spontaneously produced more informative descriptions to account for "known unknowns" in their partner's private view.