Odd One Out
12 papers with code • 1 benchmark • 1 dataset
This task tests the extent to which a language model can identify the word that does not belong in a group.
Source: BIG-bench
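The task can be framed as a simple accuracy evaluation: the model is shown a group of words and must name the odd one out. A minimal sketch of such a harness is below; `choose_odd` is a stand-in for querying a language model, and the example items and the always-pick-last baseline are illustrative, not from the benchmark.

```python
# Minimal odd-one-out evaluation sketch. `choose_odd` stands in for
# a language model's answer; a real run would prompt an LM instead.

def evaluate(examples, choose_odd):
    """Return accuracy of `choose_odd` over (words, odd_word) pairs."""
    correct = sum(choose_odd(words) == odd for words, odd in examples)
    return correct / len(examples)

# Hypothetical examples in the spirit of the task:
examples = [
    (["apple", "banana", "cherry", "carrot"], "carrot"),  # fruits vs. a vegetable
    (["red", "seven", "green", "blue"], "seven"),         # colors vs. a number
]

# A trivial baseline that always picks the last word:
baseline = lambda words: words[-1]
print(evaluate(examples, baseline))  # correct on the first example only
```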
Most implemented papers
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
Do Saliency Models Detect Odd-One-Out Targets? New Datasets and Evaluations
Furthermore, we investigate the effect of training state-of-the-art CNN-based saliency models on these types of stimuli and conclude that the additional training data does not lead to a significant improvement of their ability to find odd-one-out targets.
Training Compute-Optimal Large Language Models
We investigate the optimal model size and number of tokens for training a transformer language model under a given compute budget.
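The paper's headline finding is that, for a fixed compute budget, model size and training tokens should be scaled in roughly equal proportion, landing near ~20 tokens per parameter. As a rough illustration only (the exact constant comes from the paper's fitted scaling laws, and the standard C ≈ 6·N·D FLOPs approximation is assumed):

```python
def compute_optimal(compute_flops, tokens_per_param=20.0):
    """Rough compute-optimal split of a FLOPs budget C.

    Assumes C ≈ 6 * N * D (N = parameters, D = training tokens)
    and the ~20 tokens-per-parameter rule of thumb, so
    C ≈ 6 * N * (20 * N)  =>  N = sqrt(C / 120).
    """
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# E.g. a budget of ~5.9e23 FLOPs lands near a 70B-parameter model
# trained on ~1.4T tokens, roughly the Chinchilla configuration.
n, d = compute_optimal(5.88e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```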
VICE: Variational Interpretable Concept Embeddings
This paper introduces Variational Interpretable Concept Embeddings (VICE), an approximate Bayesian method for embedding object concepts in a vector space using data collected from humans in a triplet odd-one-out task.
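In the triplet odd-one-out task, a participant sees three objects and picks the one least similar to the other two; with learned embeddings this is typically modeled by choosing the item whose exclusion leaves the most similar remaining pair. A minimal sketch, assuming dot-product similarity and toy 2-D embeddings (not VICE's actual learned representations):

```python
import numpy as np

def odd_one_out(embeddings):
    """Given a (3, d) array of item embeddings, return the index of
    the odd item: the one whose exclusion leaves the most similar
    remaining pair under dot-product similarity."""
    sims = embeddings @ embeddings.T
    # Similarity of the pair that remains when item i is excluded:
    pair_sim = [sims[1, 2], sims[0, 2], sims[0, 1]]
    return int(np.argmax(pair_sim))

# Toy embeddings: items 0 and 1 are close; item 2 points elsewhere.
emb = np.array([[1.0, 0.0],
                [0.9, 0.1],
                [0.0, 1.0]])
print(odd_one_out(emb))  # item 2 is the odd one out
```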
We Have So Much In Common: Modeling Semantic Relational Set Abstractions in Videos
This allows our model to perform cognitive tasks such as set abstraction (which general concept is in common among a set of videos?).
Odd-One-Out Representation Learning
The effective application of representation learning to real-world problems requires both techniques for learning useful representations, and also robust ways to evaluate properties of representations.
Tell me why! Explanations support learning relational and causal structure
Inferring the abstract relational and causal structure of the world is a major challenge for reinforcement-learning (RL) agents.
Human alignment of neural network representations
Linear transformations of neural network representations learned from behavioral responses from one dataset substantially improve alignment with human similarity judgments on the other two datasets.
Spot The Odd One Out: Regularized Complete Cycle Consistent Anomaly Detector GAN
This study presents an adversarial method for anomaly detection in real-world applications, leveraging the power of generative adversarial neural networks (GANs) through cycle consistency in reconstruction error.
One Self-Configurable Model to Solve Many Abstract Visual Reasoning Problems
With the aim of developing universal learning systems in the AVR domain, we propose the unified model for solving Single-Choice Abstract visual Reasoning tasks (SCAR), capable of solving various single-choice AVR tasks, without making any a priori assumptions about the task structure, in particular the number and location of panels.