Misconceptions
36 papers with code • 1 benchmark • 1 dataset
Measures whether a model can discern popular misconceptions from the truth.
Example (a minimal scoring sketch follows the source note):
input: The daddy longlegs spider is the most venomous spider in the world.
choice: T
choice: F
answer: F

input: Karl Benz is correctly credited with the invention of the first modern automobile.
choice: T
choice: F
answer: T
Source: BIG-bench
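As a rough illustration of how items in this two-choice format could be scored, the sketch below hard-codes the two examples above and computes accuracy against the labeled answers. The field names mirror the example; `model_predict` is a hypothetical stand-in for the model under evaluation and is not part of any BIG-bench API.

```python
# Minimal scoring sketch for two-choice (T/F) misconception items.
# Each item mirrors the example format shown above.
from typing import Callable, Dict, List

items: List[Dict] = [
    {
        "input": "The daddy longlegs spider is the most venomous spider in the world.",
        "choices": ["T", "F"],
        "answer": "F",
    },
    {
        "input": "Karl Benz is correctly credited with the invention of the first modern automobile.",
        "choices": ["T", "F"],
        "answer": "T",
    },
]

def accuracy(items: List[Dict], model_predict: Callable[[str, List[str]], str]) -> float:
    """Fraction of items where the model's chosen answer matches the label."""
    correct = sum(model_predict(ex["input"], ex["choices"]) == ex["answer"] for ex in items)
    return correct / len(items)

# A trivial baseline that always answers "T" gets 0.5 on the two items above.
print(accuracy(items, lambda text, choices: "T"))
```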
Most implemented papers
Zero Shot Learning for Code Education: Rubric Sampling with Deep Learning Inference
Rubric sampling requires minimal teacher effort, can associate feedback with specific parts of a student's solution, and can articulate a student's misconceptions in the language of the instructor.
How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong Baselines, Comparative Analyses, and Some Misconceptions
In this work, we make the first step towards a comprehensive evaluation of cross-lingual word embeddings.
Not All Claims are Created Equal: Choosing the Right Statistical Approach to Assess Hypotheses
Empirical research in Natural Language Processing (NLP) has adopted a narrow set of principles for assessing hypotheses, relying mainly on p-value computation, which suffers from several known issues.
Deep Curvature Suite
We present the MLRG Deep Curvature suite, a PyTorch-based, open-source package for analysing and visualising neural network curvature and the loss landscape.
Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization
We show empirically that properly addressing these issues significantly improves the efficacy of linear embeddings for BO on a range of problems, including learning a gait policy for robot locomotion.
A Tutorial on VAEs: From Bayes' Rule to Lossless Compression
The Variational Auto-Encoder (VAE) is a simple, efficient, and popular deep maximum likelihood model.
Collecting the Public Perception of AI and Robot Rights
Whether to give rights to artificial intelligence (AI) and robots has been a sensitive topic since the European Parliament proposed advanced robots could be granted "electronic personalities."
Hindsight and Sequential Rationality of Correlated Play
This approach also leads to a game-theoretic analysis, but in the correlated play that arises from joint learning dynamics rather than factored agent behavior at equilibrium.
Emergent Communication under Competition
First, we show that communication is proportional to cooperation, and it can occur for partially competitive scenarios using standard learning algorithms.
Pay attention to your loss: understanding misconceptions about 1-Lipschitz neural networks
However, they are still commonly considered less accurate, and their learning properties are not yet fully understood.
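For context, a minimal sketch (not taken from the paper) of what the 1-Lipschitz constraint means for a dense network: each weight matrix is rescaled so its largest singular value is 1, and only 1-Lipschitz activations (ReLU) are used, so the network cannot expand distances. The helper name is illustrative only.

```python
# Sketch of a 1-Lipschitz feed-forward network via exact spectral normalization.
import torch
import torch.nn as nn

def make_1_lipschitz(linear: nn.Linear) -> nn.Linear:
    """Rescale a linear layer's weight so its spectral norm (largest singular value) is 1."""
    with torch.no_grad():
        sigma = torch.linalg.matrix_norm(linear.weight, ord=2)
        linear.weight.div_(sigma)
    return linear

net = nn.Sequential(
    make_1_lipschitz(nn.Linear(16, 32)),
    nn.ReLU(),  # ReLU is itself 1-Lipschitz
    make_1_lipschitz(nn.Linear(32, 8)),
)

# Composition of 1-Lipschitz maps is 1-Lipschitz: ||net(x) - net(y)|| <= ||x - y||.
x, y = torch.randn(16), torch.randn(16)
print(float((net(x) - net(y)).norm()), "<=", float((x - y).norm()))
```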