no code implementations • 31 Oct 2024 • Declan Campbell, Sunayana Rane, Tyler Giallanza, Nicolò De Sabbata, Kia Ghods, Amogh Joshi, Alexander Ku, Steven M. Frankland, Thomas L. Griffiths, Jonathan D. Cohen, Taylor W. Webb
Recent work has documented striking heterogeneity in the performance of state-of-the-art vision language models (VLMs), including both multimodal language models and text-to-image models.
2 code implementations • 6 Mar 2024 • Shanka Subhra Mondal, Jonathan D. Cohen, Taylor W. Webb
Abstract visual reasoning is a characteristically human ability, allowing the identification of relational patterns that are abstracted away from object features, and the systematic generalization of those patterns to unseen problems.
no code implementations • 28 Feb 2024 • Declan Campbell, Jonathan D. Cohen
The human cognitive system exhibits remarkable flexibility and generalization capabilities, partly due to its ability to form low-dimensional, compositional representations of the environment.
no code implementations • 6 Feb 2024 • Declan Campbell, Sreejan Kumar, Tyler Giallanza, Thomas L. Griffiths, Jonathan D. Cohen
Humans possess a remarkable capacity to recognize and manipulate abstract structure, which is especially apparent in the domain of geometry.
no code implementations • 29 Sep 2023 • Declan Campbell, Sreejan Kumar, Tyler Giallanza, Jonathan D. Cohen, Thomas L. Griffiths
Uniquely among primates, humans possess a remarkable capacity to recognize and manipulate abstract structure in the service of task goals across a broad range of behaviors.
no code implementations • 12 Sep 2023 • Taylor W. Webb, Steven M. Frankland, Awni Altabaa, Simon Segert, Kamesh Krishnamurthy, Declan Campbell, Jacob Russin, Tyler Giallanza, Zack Dulberg, Randall O'Reilly, John Lafferty, Jonathan D. Cohen
A central challenge for cognitive science is to explain how abstract concepts are acquired from limited experience.
no code implementations • 14 Jul 2023 • Ryan Pyle, Sebastian Musslick, Jonathan D. Cohen, Ankit B. Patel
A key property of neural networks (both biological and artificial) is how they learn to represent and manipulate input information in order to solve a task.
1 code implementation • NeurIPS 2023 • Taylor W. Webb, Shanka Subhra Mondal, Jonathan D. Cohen
Human visual reasoning is characterized by an ability to identify abstract patterns from only a small number of examples, and to systematically generalize those patterns to novel inputs.
1 code implementation • 28 May 2023 • Shanka Subhra Mondal, Steven Frankland, Taylor Webb, Jonathan D. Cohen
Deep neural networks have made tremendous gains in emulating human-like intelligence, and have been used increasingly as ways of understanding how the brain may solve the complex computational problems on which this relies.
1 code implementation • 3 Mar 2023 • Shanka Subhra Mondal, Taylor Webb, Jonathan D. Cohen
These results suggest that an inductive bias for object-centric processing may be a key component of abstract visual reasoning, obviating the need for problem-specific inductive biases.
1 code implementation • 23 May 2022 • Sreejan Kumar, Carlos G. Correa, Ishita Dasgupta, Raja Marjieh, Michael Y. Hu, Robert D. Hawkins, Nathaniel D. Daw, Jonathan D. Cohen, Karthik Narasimhan, Thomas L. Griffiths
Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.
no code implementations • 13 Apr 2022 • Zack Dulberg, Rachit Dubey, Isabel M. Berwian, Jonathan D. Cohen
The problem of balancing conflicting needs is fundamental to intelligence.
1 code implementation • 4 Apr 2022 • Sreejan Kumar, Ishita Dasgupta, Nathaniel D. Daw, Jonathan D. Cohen, Thomas L. Griffiths
However, because neural networks are hard to interpret, it can be difficult to tell whether agents have learned the underlying abstraction, or alternatively statistical patterns that are characteristic of that abstraction.
no code implementations • 14 Jun 2021 • Simon N. Segert, Jonathan D. Cohen
Understanding how agents learn to generalize -- and, in particular, to extrapolate -- in high-dimensional, naturalistic environments remains a challenge for both machine learning and the study of biological agents.
no code implementations • 14 May 2021 • Mark K. Ho, David Abel, Carlos G. Correa, Michael L. Littman, Jonathan D. Cohen, Thomas L. Griffiths
We propose a computational account of this simplification process and, in a series of pre-registered behavioral experiments, show that it is subject to online cognitive control and that people optimally balance the complexity of a task representation and its utility for planning and acting.
no code implementations • 27 Jan 2021 • Arthur Prat-Carrabin, Robert C. Wilson, Jonathan D. Cohen, Rava Azeredo da Silveira
We show that humans adapt their inference process to fine aspects of the temporal structure in the statistics of stimuli.
2 code implementations • ICLR 2021 • Taylor W. Webb, Ishan Sinha, Jonathan D. Cohen
A key aspect of human intelligence is the ability to infer abstract rules directly from high-dimensional sensory data, and to do so given only a limited amount of training experience.
no code implementations • 13 Dec 2020 • Ishan Sinha, Taylor W. Webb, Jonathan D. Cohen
Further, we introduce the Emergent Symbol Binding Network (ESBN), a recurrent neural network model that learns to use an external memory as a binding mechanism.
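The ESBN's defining move, per this excerpt, is routing perception through an external key-value memory rather than feeding it to the controller directly. Below is a minimal PyTorch sketch of that binding idea under simplifying assumptions: the gating and confidence terms of the published model are omitted, and all module choices and dimensions are illustrative, not the authors' exact specification.

```python
import torch
import torch.nn as nn

class ESBNSketch(nn.Module):
    """Hypothetical simplification of an ESBN-style binder: an LSTM
    controller emits learned symbols (keys) that are bound to perceptual
    embeddings (values) in an external memory; the controller receives
    only symbols retrieved by similarity over stored values, never the
    perceptual embeddings themselves."""

    def __init__(self, emb_dim=128, hid_dim=512, key_dim=256):
        super().__init__()
        self.encoder = nn.Linear(emb_dim, emb_dim)        # stand-in perceptual encoder
        self.controller = nn.LSTMCell(key_dim, hid_dim)   # sees retrieved keys only
        self.key_out = nn.Linear(hid_dim, key_dim)        # symbol written to memory

    def forward(self, x):                                 # x: (T, B, emb_dim)
        T, B, _ = x.shape
        h = x.new_zeros(B, self.controller.hidden_size)
        c = torch.zeros_like(h)
        k_read = x.new_zeros(B, self.key_out.out_features)
        keys, values = [], []
        for t in range(T):
            z = self.encoder(x[t])                        # perceptual value to bind
            if values:
                V = torch.stack(values, dim=1)            # (B, t, emb_dim)
                K = torch.stack(keys, dim=1)              # (B, t, key_dim)
                w = torch.softmax((V * z.unsqueeze(1)).sum(-1), dim=1)
                k_read = (w.unsqueeze(-1) * K).sum(dim=1) # retrieve the bound symbol
            h, c = self.controller(k_read, (h, c))        # controller never sees z
            keys.append(self.key_out(h))                  # bind new (symbol, value) pair
            values.append(z)
        return h                                          # feed a task head downstream
```

For instance, `ESBNSketch()(torch.randn(5, 2, 128))` processes a 5-step sequence for a batch of 2. The separation is the point of the design: because the controller only ever manipulates its own learned keys, the same key dynamics can generalize to perceptual inputs never seen in training.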
no code implementations • 2 Dec 2020 • Jonathan D. Cohen
This note describes a simple score for gauging the effectiveness of COVID-19 mitigation, as observed in new case counts.
1 code implementation • ICLR 2021 • Sreejan Kumar, Ishita Dasgupta, Jonathan D. Cohen, Nathaniel D. Daw, Thomas L. Griffiths
We then introduce a novel approach to constructing a "null task distribution" with the same statistical complexity as this structured task distribution, but without the explicit rule-based structure used to generate it.
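The contrast in this excerpt is easiest to see with a toy example. The sketch below is a hypothetical illustration, not the paper's actual construction: a "structured" board follows a simple compositional rule, while its "null" counterpart preserves a matched summary statistic (the number of active cells) but scrambles the rule-based structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def structured_board(n=7):
    """Rule-based board: one full row and one full column (a 'cross')."""
    b = np.zeros((n, n), dtype=int)
    b[rng.integers(n), :] = 1
    b[:, rng.integers(n)] = 1
    return b

def null_board(n=7):
    """Same count of active cells as a cross, but at random positions:
    marginal statistics are preserved while the rule is destroyed."""
    b = structured_board(n).ravel()
    rng.shuffle(b)
    return b.reshape(n, n)

print(structured_board())
print(null_board())
```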
no code implementations • 20 Jul 2020 • Sachin Ravi, Sebastian Musslick, Maia Hamin, Theodore L. Willke, Jonathan D. Cohen
The terms multi-task learning and multitasking are easily confused.
1 code implementation • ICML 2020 • Taylor W. Webb, Zachary Dulberg, Steven M. Frankland, Alexander A. Petrov, Randall C. O'Reilly, Jonathan D. Cohen
Extrapolation -- the ability to make inferences that go beyond the scope of one's experiences -- is a hallmark of human intelligence.
no code implementations • 13 Feb 2020 • Mark K. Ho, David Abel, Jonathan D. Cohen, Michael L. Littman, Thomas L. Griffiths
Thus, people should plan their actions, but they should also be smart about how they deploy the resources used for planning.
no code implementations • 15 Oct 2019 • Marius Cătălin Iordan, Tyler Giallanza, Cameron T. Ellis, Nicole M. Beckage, Jonathan D. Cohen
Applying machine learning algorithms to large-scale, text-based corpora to produce embeddings presents a unique opportunity to investigate, at scale, how human semantic knowledge is organized and how people use it to judge fundamental relationships, such as similarity between concepts.
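The core operation in this excerpt, judging similarity between concepts from corpus-derived embeddings, reduces to a vector comparison. A toy sketch with random stand-in vectors (a real study would load embeddings trained on a large corpus, and the word list here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: rng.standard_normal(300) for w in ["dog", "cat", "car"]}

def cosine(u, v):
    """Cosine similarity: the standard proxy for semantic relatedness."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(vocab["dog"], vocab["cat"]))   # model's similarity judgment
print(cosine(vocab["dog"], vocab["car"]))   # compare against human ratings
```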
no code implementations • NeurIPS 2017 • Noga Alon, Daniel Reichman, Igor Shinkar, Tal Wagner, Sebastian Musslick, Jonathan D. Cohen, Tom Griffiths, Biswadip Dey, Kayhan Ozcimder
A key feature of neural network architectures is their ability to support the simultaneous interaction among large numbers of units in the learning and processing of representations.
1 code implementation • 8 Nov 2017 • Michael Shvartsman, Narayanan Sundaram, Mikio C. Aoi, Adam Charles, Theodore L. Willke, Jonathan D. Cohen
We show how the matrix-variate normal (MN) formalism can unify some of these methods into a single framework.
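For reference, the matrix-variate normal density in its standard textbook form (the general definition, not a paper-specific result): an n×p matrix X ~ MN(M, U, V) with row covariance U and column covariance V is equivalent to a Kronecker-structured multivariate normal over vec(X).

```latex
% Standard matrix-variate normal density. X is n x p, M its mean,
% U (n x n) the row covariance, V (p x p) the column covariance.
\[
p(X \mid M, U, V)
  = \frac{\exp\!\big(-\tfrac{1}{2}\,
      \operatorname{tr}\!\big[\,V^{-1}(X - M)^{\top} U^{-1}(X - M)\big]\big)}
         {(2\pi)^{np/2}\, |V|^{n/2}\, |U|^{p/2}}
\]
% Equivalently, as a multivariate normal with Kronecker covariance:
\[
\operatorname{vec}(X) \sim \mathcal{N}\!\big(\operatorname{vec}(M),\; V \otimes U\big)
\]
```

The Kronecker factorization is what makes the unification useful in practice: U and V can be estimated separately, avoiding the full np×np covariance.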
1 code implementation • NeurIPS 2015 • Michael Shvartsman, Vaibhav Srivastava, Jonathan D. Cohen
We also show how the model generalizes recent work on the control of attention in the Flanker task (Yu et al., 2009).
no code implementations • NeurIPS 2008 • Michael T. Todd, Yael Niv, Jonathan D. Cohen
Working memory is a central topic of cognitive neuroscience because it is critical for solving real-world problems in which information from multiple temporally distant sources must be combined to generate appropriate behavior.
no code implementations • NeurIPS 2008 • Angela J. Yu, Jonathan D. Cohen
In a variety of behavioral tasks, subjects exhibit an automatic and apparently sub-optimal sequential effect: they respond more rapidly and accurately to a stimulus if it reinforces a local pattern in stimulus history, such as a string of repetitions or alternations, compared to when it violates such a pattern.