no code implementations • 14 Oct 2024 • Kazuki Irie, Brenden M. Lake
Since the earliest proposals for neural network models of the mind and brain, critics have pointed out key weaknesses in these models compared to human cognitive abilities.
no code implementations • 2 Sep 2024 • Solim LeGris, Wai Keen Vong, Brenden M. Lake, Todd M. Gureckis
In this work, we obtain a more robust estimate of human performance by evaluating 1729 humans on the full set of 400 training and 400 evaluation tasks from the original ARC problem set.
1 code implementation • 22 Jun 2024 • Michael A. Lepori, Alexa R. Tartaglini, Wai Keen Vong, Thomas Serre, Brenden M. Lake, Ellie Pavlick
We present a case study of a fundamental, yet surprisingly difficult, relational reasoning task: judging whether two visual entities are the same or different.
1 code implementation • 21 May 2024 • Guy Davidson, Graham Todd, Julian Togelius, Todd M. Gureckis, Brenden M. Lake
People are remarkably capable of generating their own goals, beginning with child's play and continuing into adulthood.
no code implementations • 18 Mar 2024 • Yanli Zhou, Brenden M. Lake, Adina Williams
Extending the investigation into the visual domain, we developed a function learning paradigm to explore how humans and neural network models learn and reason with compositional functions under varied interaction conditions.
1 code implementation • 12 Feb 2024 • Yulu Qin, Wentao Wang, Brenden M. Lake
However, a significant gap exists between the training data for these models and the linguistic input a child receives.
1 code implementation • 1 Feb 2024 • A. Emin Orhan, Wentao Wang, Alex N. Wang, Mengye Ren, Brenden M. Lake
These results suggest that important temporal aspects of a child's internal model of the world may be learnable from their visual experience using highly generic learning algorithms and without strong inductive biases.
no code implementations • 14 Oct 2023 • Alexa R. Tartaglini, Sheridan Feucht, Michael A. Lepori, Wai Keen Vong, Charles Lovering, Brenden M. Lake, Ellie Pavlick
Much of this prior work focuses on training convolutional neural networks to classify images containing either two identical or two different abstract shapes, testing generalization on within-distribution stimuli.
no code implementations • 30 May 2023 • Yanli Zhou, Reuben Feinman, Brenden M. Lake
In few-shot classification tasks, we find that people and the program induction model can make a range of meaningful compositional generalizations. The model provides a strong account of the experimental data, as well as interpretable parameters that reveal human assumptions about the factors to which category membership is invariant (here, rotation and changes in part attachment).
1 code implementation • 24 May 2023 • A. Emin Orhan, Brenden M. Lake
Young children develop sophisticated internal models of the world based on their visual experience.
1 code implementation • 16 Feb 2022 • Alexa R. Tartaglini, Wai Keen Vong, Brenden M. Lake
In this work, we re-examine the inductive biases of neural networks by adapting the stimuli and procedure from Geirhos et al. (2019) to more closely follow the developmental paradigm and test on a wide range of pre-trained neural networks.
no code implementations • NeurIPS 2021 • Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum, Brenden M. Lake
Human reasoning can often be understood as an interplay between two systems: the intuitive and associative ("System 1") and the deliberative and logical ("System 2").
no code implementations • 20 May 2021 • Yanli Zhou, Brenden M. Lake
Humans are highly efficient learners, with the ability to grasp the meaning of a new concept from just a few examples.
no code implementations • 10 Mar 2021 • Aysja Johnson, Wai Keen Vong, Brenden M. Lake, Todd M. Gureckis
The Abstraction and Reasoning Corpus (ARC) is a challenging program induction dataset that was recently proposed by Chollet (2019).
no code implementations • NeurIPS 2021 • Kanishk Gandhi, Gala Stojnic, Brenden M. Lake, Moira R. Dillon
To achieve human-like common sense about everyday life, machine learning systems must understand and reason about the goals, preferences, and actions of other agents in the environment.
no code implementations • 4 Aug 2020 • Brenden M. Lake, Gregory L. Murphy
Machines have achieved a broad and growing set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP).
1 code implementation • NeurIPS 2020 • A. Emin Orhan, Vaibhav V. Gupta, Brenden M. Lake
Within months of birth, children develop meaningful expectations about the world around them.
1 code implementation • ICLR 2021 • Reuben Feinman, Brenden M. Lake
We develop a generative neuro-symbolic (GNS) model of handwritten character concepts that uses the control flow of a probabilistic program, coupled with symbolic stroke primitives and a symbolic image renderer, to represent the causal and compositional processes by which characters are formed.
no code implementations • 19 Mar 2020 • Reuben Feinman, Brenden M. Lake
A second tradition has emphasized statistical knowledge, viewing conceptual knowledge as emerging from the rich correlational structure captured by training neural networks and other statistical models.
1 code implementation • NeurIPS 2020 • Maxwell I. Nye, Armando Solar-Lezama, Joshua B. Tenenbaum, Brenden M. Lake
Many aspects of human reasoning, including language, require learning rules from very little data.
no code implementations • 12 Mar 2020 • Wai Keen Vong, Brenden M. Lake
How do children learn correspondences between language and the world from noisy, ambiguous, naturalistic input?
4 code implementations • NeurIPS 2020 • Laura Ruis, Jacob Andreas, Marco Baroni, Diane Bouchacourt, Brenden M. Lake
In this paper, we introduce a new benchmark, gSCAN, for evaluating compositional generalization in situated language understanding.
no code implementations • 16 Feb 2020 • Guy Davidson, Brenden M. Lake
We explore the benefits of augmenting state-of-the-art model-free deep reinforcement learning algorithms with simple object representations.
1 code implementation • 23 Jul 2019 • Ziyun Wang, Brenden M. Lake
People ask questions that are far richer, more informative, and more creative than current AI systems.
no code implementations • NeurIPS 2020 • Kanishk Gandhi, Brenden M. Lake
Strong inductive biases allow children to learn in fast and adaptable ways.
1 code implementation • 20 Jun 2019 • A. Emin Orhan, Brenden M. Lake
Consistent with previous work, we show that an explicit episodic memory improves the robustness of image recognition models against small-norm adversarial perturbations under some threat models.
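To make this concrete, below is a minimal sketch of one way an explicit episodic memory for image recognition could work, assuming it is a cache of stored feature embeddings queried by nearest-neighbor lookup at test time; the paper's actual memory (choice of features, similarity measure, and vote weighting) may differ.

```python
# Minimal sketch, assuming the "episodic memory" is a cache of stored feature
# embeddings queried by k-nearest neighbors at test time. All details here
# (cosine similarity, majority vote, random stand-in embeddings) are
# illustrative assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F

class EpisodicMemory:
    def __init__(self, features: torch.Tensor, labels: torch.Tensor):
        # features: (N, D) embeddings of stored training images; labels: (N,)
        self.features = F.normalize(features, dim=1)
        self.labels = labels

    def classify(self, query: torch.Tensor, k: int = 5) -> torch.Tensor:
        # query: (B, D) embeddings of test images
        q = F.normalize(query, dim=1)
        sims = q @ self.features.T               # cosine similarities, (B, N)
        neighbors = sims.topk(k, dim=1).indices  # indices of k nearest items
        votes = self.labels[neighbors]           # (B, k) labels of the neighbors
        return votes.mode(dim=1).values          # majority-vote prediction

# Usage with random stand-in embeddings and labels
memory = EpisodicMemory(torch.randn(1000, 128), torch.randint(0, 10, (1000,)))
predictions = memory.classify(torch.randn(4, 128))
```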
1 code implementation • NeurIPS 2019 • Brenden M. Lake
People can learn a new concept and use it compositionally, understanding how to "blicket twice" after learning how to "blicket."
no code implementations • 17 Apr 2019 • Brenden M. Lake, Steven T. Piantadosi
Machine learning has made major advances in categorizing objects in images, yet the best algorithms miss important aspects of how people learn and think about categories.
1 code implementation • 5 Mar 2019 • Reuben Feinman, Brenden M. Lake
We propose a smooth kernel regularizer that encourages spatial correlations in convolution kernel weights.
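As an illustration of the general idea, the sketch below penalizes differences between spatially adjacent kernel weights, which encourages smooth (spatially correlated) kernels; this generic penalty is a stand-in and not necessarily the regularizer proposed in the paper.

```python
# Illustrative sketch only: a generic smoothness penalty on conv kernels that
# discourages large differences between spatially adjacent weights. The
# paper's regularizer may be constructed differently; this is a stand-in.
import torch
import torch.nn as nn

def smooth_kernel_penalty(conv: nn.Conv2d) -> torch.Tensor:
    w = conv.weight                        # shape: (out_ch, in_ch, kH, kW)
    dh = w[..., 1:, :] - w[..., :-1, :]    # differences between vertical neighbors
    dw = w[..., :, 1:] - w[..., :, :-1]    # differences between horizontal neighbors
    return dh.pow(2).mean() + dw.pow(2).mean()

# Usage: add the penalty to the task loss during training.
conv = nn.Conv2d(3, 16, kernel_size=5)
x = torch.randn(8, 3, 32, 32)
task_loss = conv(x).pow(2).mean()          # placeholder task loss
loss = task_loss + 0.1 * smooth_kernel_penalty(conv)
loss.backward()
```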
7 code implementations • 9 Feb 2019 • Brenden M. Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum
Three years ago, we released the Omniglot dataset for one-shot learning, along with five challenge tasks and a computational model that addresses these tasks.
2 code implementations • 14 Jan 2019 • Brenden M. Lake, Tal Linzen, Marco Baroni
There have been striking recent improvements in machine learning for natural language processing, yet the best algorithms require vast amounts of experience and struggle to generalize new concepts in compositional ways.
no code implementations • WS 2018 • João Loula, Marco Baroni, Brenden M. Lake
Systematic compositionality is the ability to recombine meaningful units with regular and predictable outcomes, and it is seen as key to humans' capacity for generalization in language.
1 code implementation • 8 Feb 2018 • Reuben Feinman, Brenden M. Lake
People use rich prior knowledge about the world in order to efficiently learn new concepts.
1 code implementation • NeurIPS 2017 • Anselm Rothe, Brenden M. Lake, Todd M. Gureckis
A hallmark of human intelligence is the ability to ask rich, creative, and revealing questions.
7 code implementations • ICML 2018 • Brenden M. Lake, Marco Baroni
Humans can understand and produce new utterances effortlessly, thanks to their compositional skills.
1 code implementation • 28 Nov 2016 • Brenden M. Lake, Neil D. Lawrence, Joshua B. Tenenbaum
While this approach can learn intuitive organizations, including a tree for animals and a ring for the color circle, it assumes a strong inductive bias that considers only these particular forms, and each form is explicitly provided as initial knowledge.
no code implementations • 1 Apr 2016 • Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people.
no code implementations • NeurIPS 2015 • Mathew Monfort, Brenden M. Lake, Brian Ziebart, Patrick Lucey, Josh Tenenbaum
Recent machine learning methods for sequential behavior prediction estimate the motives of behavior rather than the behavior itself.
no code implementations • NeurIPS 2013 • Brenden M. Lake, Ruslan R. Salakhutdinov, Josh Tenenbaum
People can learn a new visual class from just one example, yet machine learning algorithms typically require hundreds or thousands of examples to tackle the same problems.