Search Results for author: Brenden M. Lake

Found 39 papers, 20 papers with code

Neural networks that overcome classic challenges through practice

no code implementations • 14 Oct 2024 • Kazuki Irie, Brenden M. Lake

Since the earliest proposals for neural network models of the mind and brain, critics have pointed out key weaknesses in these models compared to human cognitive abilities.

Few-Shot Learning

H-ARC: A Robust Estimate of Human Performance on the Abstraction and Reasoning Corpus Benchmark

no code implementations • 2 Sep 2024 • Solim LeGris, Wai Keen Vong, Brenden M. Lake, Todd M. Gureckis

In this work, we obtain a more robust estimate of human performance by evaluating 1729 humans on the full set of 400 training and 400 evaluation tasks from the original ARC problem set.

ARC, Out-of-Distribution Generalization, +1 more

Beyond the Doors of Perception: Vision Transformers Represent Relations Between Objects

1 code implementation • 22 Jun 2024 • Michael A. Lepori, Alexa R. Tartaglini, Wai Keen Vong, Thomas Serre, Brenden M. Lake, Ellie Pavlick

We present a case study of a fundamental, yet surprisingly difficult, relational reasoning task: judging whether two visual entities are the same or different.

Relational Reasoning, Visual Reasoning

Goals as Reward-Producing Programs

1 code implementation • 21 May 2024 • Guy Davidson, Graham Todd, Julian Togelius, Todd M. Gureckis, Brenden M. Lake

People are remarkably capable of generating their own goals, beginning with child's play and continuing into adulthood.

Diversity, Program Synthesis

Compositional learning of functions in humans and machines

no code implementations • 18 Mar 2024 • Yanli Zhou, Brenden M. Lake, Adina Williams

Extending the investigation into the visual domain, we developed a function learning paradigm to explore the capacity of humans and neural network models in learning and reasoning with compositional functions under varied interaction conditions.

Meta-Learning

A systematic investigation of learnability from single child linguistic input

1 code implementation • 12 Feb 2024 • Yulu Qin, Wentao Wang, Brenden M. Lake

However, a significant gap exists between the training data for these models and the linguistic input a child receives.

Self-supervised learning of video representations from a child's perspective

1 code implementation • 1 Feb 2024 • A. Emin Orhan, Wentao Wang, Alex N. Wang, Mengye Ren, Brenden M. Lake

These results suggest that important temporal aspects of a child's internal model of the world may be learnable from their visual experience using highly generic learning algorithms and without strong inductive biases.

Object Recognition, Self-Supervised Learning

Deep Neural Networks Can Learn Generalizable Same-Different Visual Relations

no code implementations • 14 Oct 2023 • Alexa R. Tartaglini, Sheridan Feucht, Michael A. Lepori, Wai Keen Vong, Charles Lovering, Brenden M. Lake, Ellie Pavlick

Much of this prior work focuses on training convolutional neural networks to classify images of two same or two different abstract shapes, testing generalization on within-distribution stimuli.

Object Recognition, Out-of-Distribution Generalization

Compositional diversity in visual concept learning

no code implementations • 30 May 2023 • Yanli Zhou, Reuben Feinman, Brenden M. Lake

In few-shot classification tasks, we find that people and the program induction model can make a range of meaningful compositional generalizations, with the model providing a strong account of the experimental data as well as interpretable parameters that reveal human assumptions about the factors invariant to category membership (here, to rotation and changing part attachment).

Diversity, Program induction

A Developmentally-Inspired Examination of Shape versus Texture Bias in Machines

1 code implementation • 16 Feb 2022 • Alexa R. Tartaglini, Wai Keen Vong, Brenden M. Lake

In this work, we re-examine the inductive biases of neural networks by adapting the stimuli and procedure from Geirhos et al. (2019) to more closely follow the developmental paradigm and test on a wide range of pre-trained neural networks.

Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning

no code implementations • NeurIPS 2021 • Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum, Brenden M. Lake

Human reasoning can often be understood as an interplay between two systems: the intuitive and associative ("System 1") and the deliberative and logical ("System 2").
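
As a rough illustration of this two-system interplay, the sketch below pairs a stand-in "System 1" proposer with a symbolic "System 2" consistency check over a minimal world state; the candidate sentences, the world-state fields, and the check are hypothetical stand-ins, not the paper's actual model.

```python
import random

def system1_propose(prompt, n=3):
    # Stand-in for a neural "System 1" generator: samples candidate continuations.
    # (Hypothetical; a real implementation would sample from a language model.)
    candidates = ["Alice still has the key.", "Bob has the key now.", "The key is lost."]
    return random.sample(candidates, k=min(n, len(candidates)))

def system2_accepts(candidate, world_state):
    # Stand-in for a symbolic "System 2" checker: reject continuations that
    # contradict facts tracked in a minimal world state.
    if world_state.get("key_holder") == "Alice" and "Bob has the key" in candidate:
        return False
    return True

def generate(prompt, world_state):
    # Interplay of the two systems: System 1 proposes, System 2 vetoes.
    for candidate in system1_propose(prompt):
        if system2_accepts(candidate, world_state):
            return candidate
    return None

print(generate("Who has the key?", {"key_holder": "Alice"}))
```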

Instruction Following, Logical Reasoning, +1 more

Flexible Compositional Learning of Structured Visual Concepts

no code implementations • 20 May 2021 • Yanli Zhou, Brenden M. Lake

Humans are highly efficient learners, with the ability to grasp the meaning of a new concept from just a few examples.

Program induction

Fast and flexible: Human program induction in abstract reasoning tasks

no code implementations • 10 Mar 2021 • Aysja Johnson, Wai Keen Vong, Brenden M. Lake, Todd M. Gureckis

The Abstraction and Reasoning Corpus (ARC) is a challenging program induction dataset that was recently proposed by Chollet (2019).

ARC, Program induction

Baby Intuitions Benchmark (BIB): Discerning the goals, preferences, and actions of others

no code implementations • NeurIPS 2021 • Kanishk Gandhi, Gala Stojnic, Brenden M. Lake, Moira R. Dillon

To achieve human-like common sense about everyday life, machine learning systems must understand and reason about the goals, preferences, and actions of other agents in the environment.

Common Sense Reasoning

Word meaning in minds and machines

no code implementations • 4 Aug 2020 • Brenden M. Lake, Gregory L. Murphy

Machines have achieved a broad and growing set of linguistic competencies, thanks to recent progress in Natural Language Processing (NLP).

Word Similarity

Learning Task-General Representations with Generative Neuro-Symbolic Modeling

1 code implementation • ICLR 2021 • Reuben Feinman, Brenden M. Lake

We develop a generative neuro-symbolic (GNS) model of handwritten character concepts that uses the control flow of a probabilistic program, coupled with symbolic stroke primitives and a symbolic image renderer, to represent the causal and compositional processes by which characters are formed.
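
As a loose sketch of the control-flow idea in that description, the toy generator below samples a number of strokes, picks a primitive and a start location for each, and renders them onto a small canvas; the primitive set, placement rule, and pixel renderer are simplified stand-ins, not the GNS model's actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stroke primitives: per-step (dx, dy) offsets (hypothetical, not the paper's).
PRIMITIVES = [np.array([[1, 0]] * 5), np.array([[0, 1]] * 5), np.array([[1, 1]] * 4)]

def sample_character(canvas_size=28):
    # Control flow: sample how many strokes, then which primitive and where.
    canvas = np.zeros((canvas_size, canvas_size), dtype=np.uint8)
    n_strokes = rng.integers(1, 4)
    for _ in range(n_strokes):
        prim = PRIMITIVES[rng.integers(len(PRIMITIVES))]
        start = rng.integers(5, canvas_size - 10, size=2)
        for step in prim.cumsum(axis=0):
            x, y = start + step
            canvas[y, x] = 1          # toy renderer: just set pixels along the stroke
    return canvas

print(sample_character().sum(), "ink pixels")
```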

Generating new concepts with hybrid neuro-symbolic models

no code implementations • 19 Mar 2020 • Reuben Feinman, Brenden M. Lake

A second tradition has emphasized statistical knowledge, viewing conceptual knowledge as emerging from the rich correlational structure captured by training neural networks and other statistical models.

Learning word-referent mappings and concepts from raw inputs

no code implementations • 12 Mar 2020 • Wai Keen Vong, Brenden M. Lake

How do children learn correspondences between the language and the world from noisy, ambiguous, naturalistic input?

A Benchmark for Systematic Generalization in Grounded Language Understanding

4 code implementations • NeurIPS 2020 • Laura Ruis, Jacob Andreas, Marco Baroni, Diane Bouchacourt, Brenden M. Lake

In this paper, we introduce a new benchmark, gSCAN, for evaluating compositional generalization in situated language understanding.
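
For readers unfamiliar with the setup, one gSCAN-style example pairs a command and a grid-world state with a target action sequence; the field names and values below are an illustrative sketch, not the dataset's actual schema.

```python
# Illustrative shape of a single gSCAN-style example (field names are hypothetical).
example = {
    "command": "walk to the small red circle",
    "grid": {
        "size": 6,
        "agent": {"position": (0, 0), "direction": "east"},
        "objects": [
            {"shape": "circle", "color": "red", "size": "small", "position": (3, 2)},
            {"shape": "square", "color": "blue", "size": "big", "position": (5, 5)},
        ],
    },
    # Target output: an action sequence grounded in the grid state above.
    "target_actions": ["walk", "walk", "walk", "turn right", "walk", "walk"],
}

print(len(example["target_actions"]), "actions for:", example["command"])
```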

Systematic Generalization

Investigating Simple Object Representations in Model-Free Deep Reinforcement Learning

no code implementations • 16 Feb 2020 • Guy Davidson, Brenden M. Lake

We explore the benefits of augmenting state-of-the-art model-free deep reinforcement learning algorithms with simple object representations.
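
A minimal sketch of that kind of augmentation is shown below: a policy head that consumes the usual CNN features concatenated with a flat vector of simple object features. The layer sizes and feature definitions are hypothetical, not the paper's setup.

```python
import torch
import torch.nn as nn

class ObjectAugmentedPolicy(nn.Module):
    # Toy actor head that takes CNN features plus hand-coded object features
    # (e.g., positions of salient objects) and outputs action logits.
    def __init__(self, cnn_feat_dim=256, obj_feat_dim=16, n_actions=6):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(cnn_feat_dim + obj_feat_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, cnn_feats, obj_feats):
        return self.fc(torch.cat([cnn_feats, obj_feats], dim=-1))

policy = ObjectAugmentedPolicy()
logits = policy(torch.randn(1, 256), torch.randn(1, 16))
print(logits.shape)  # torch.Size([1, 6])
```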

Deep Reinforcement Learning, Object, +2 more

Modeling question asking using neural program generation

1 code implementation • 23 Jul 2019 • Ziyun Wang, Brenden M. Lake

People ask questions that are far richer, more informative, and more creative than current AI systems.

Decoder, Question Generation, +4 more

Improving the robustness of ImageNet classifiers using elements of human visual cognition

1 code implementation • 20 Jun 2019 • A. Emin Orhan, Brenden M. Lake

As reported in previous work, we show that an explicit episodic memory improves the robustness of image recognition models against small-norm adversarial perturbations under some threat models.
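
A rough sketch of the episodic-memory idea: store features and labels of training examples, let nearest neighbours in feature space vote, and blend that vote with the classifier's own prediction. The feature extractor, distance, and mixing rule here are illustrative assumptions, not the paper's exact cache model.

```python
import numpy as np

class EpisodicCache:
    # Toy episodic memory over feature vectors (e.g., penultimate-layer activations).
    def __init__(self, features, labels, n_classes):
        self.features = features      # (N, D) stored keys
        self.labels = labels          # (N,) stored class labels
        self.n_classes = n_classes

    def predict_proba(self, query, k=5):
        # Nearest neighbours in feature space vote for a class distribution.
        dists = np.linalg.norm(self.features - query, axis=1)
        nearest = np.argsort(dists)[:k]
        counts = np.bincount(self.labels[nearest], minlength=self.n_classes)
        return counts / counts.sum()

rng = np.random.default_rng(0)
cache = EpisodicCache(rng.normal(size=(100, 8)), rng.integers(0, 3, size=100), n_classes=3)
model_probs = np.array([0.2, 0.5, 0.3])   # stand-in for the network's own softmax output
blended = 0.5 * model_probs + 0.5 * cache.predict_proba(rng.normal(size=8))
print(blended)
```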

Clustering, Retrieval

Compositional generalization through meta sequence-to-sequence learning

1 code implementation • NeurIPS 2019 • Brenden M. Lake

People can learn a new concept and use it compositionally, understanding how to "blicket twice" after learning how to "blicket."
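
A sketch of what a single meta-learning episode might look like in this setting: a support set that grounds primitive words for this episode, and a query that requires composing them ("blicket twice"). The vocabulary and episode format below are illustrative, not the paper's exact data.

```python
# One hypothetical episode: the model must infer what "blicket" means in this
# episode from the support set, then apply the compositional rule in the query.
episode = {
    "support": [
        ("blicket", ["JUMP"]),
        ("dax", ["RUN"]),
        ("dax twice", ["RUN", "RUN"]),
    ],
    "query": ("blicket twice", ["JUMP", "JUMP"]),
}

# Across episodes the word-to-action assignments are reshuffled, so a model has
# to learn the rule ("X twice" -> X X) rather than memorize particular words.
command, target = episode["query"]
print(command, "->", target)
```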

People infer recursive visual concepts from just a few examples

no code implementations • 17 Apr 2019 • Brenden M. Lake, Steven T. Piantadosi

Machine learning has made major advances in categorizing objects in images, yet the best algorithms miss important aspects of how people learn and think about categories.

Object Recognition

Learning a smooth kernel regularizer for convolutional neural networks

1 code implementation • 5 Mar 2019 • Reuben Feinman, Brenden M. Lake

We propose a smooth kernel regularizer that encourages spatial correlations in convolution kernel weights.
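
One simple way to encourage spatially correlated kernel weights is to penalize squared differences between neighbouring entries of each convolution kernel, as sketched below; the paper derives its regularizer differently, so this finite-difference penalty is only an illustrative stand-in for the general idea.

```python
import torch
import torch.nn as nn

def smoothness_penalty(conv: nn.Conv2d) -> torch.Tensor:
    # Sum of squared differences between neighbouring kernel weights; larger
    # values mean rougher, less spatially correlated kernels.
    w = conv.weight                                     # (out_ch, in_ch, kH, kW)
    dh = (w[..., 1:, :] - w[..., :-1, :]).pow(2).sum()  # vertical neighbours
    dw = (w[..., :, 1:] - w[..., :, :-1]).pow(2).sum()  # horizontal neighbours
    return dh + dw

conv = nn.Conv2d(3, 16, kernel_size=5)
task_loss = torch.tensor(0.0)                           # stand-in for the usual training loss
total_loss = task_loss + 1e-3 * smoothness_penalty(conv)
print(float(total_loss))
```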

L2 Regularization, Object Recognition

The Omniglot challenge: a 3-year progress report

7 code implementations • 9 Feb 2019 • Brenden M. Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum

Three years ago, we released the Omniglot dataset for one-shot learning, along with five challenge tasks and a computational model that addresses these tasks.

General Classification, One-Shot Learning

Human few-shot learning of compositional instructions

2 code implementations • 14 Jan 2019 • Brenden M. Lake, Tal Linzen, Marco Baroni

There have been striking recent improvements in machine learning for natural language processing, yet the best algorithms require vast amounts of experience and struggle to generalize new concepts in compositional ways.

Few-Shot Learning

Rearranging the Familiar: Testing Compositional Generalization in Recurrent Networks

no code implementations • WS 2018 • João Loula, Marco Baroni, Brenden M. Lake

Systematic compositionality is the ability to recombine meaningful units with regular and predictable outcomes, and it's seen as key to humans' capacity for generalization in language.

Learning Inductive Biases with Simple Neural Networks

1 code implementation • 8 Feb 2018 • Reuben Feinman, Brenden M. Lake

People use rich prior knowledge about the world in order to efficiently learn new concepts.

Inductive Bias, Object Recognition

Question Asking as Program Generation

1 code implementation • NeurIPS 2017 • Anselm Rothe, Brenden M. Lake, Todd M. Gureckis

A hallmark of human intelligence is the ability to ask rich, creative, and revealing questions.

Informativeness

The Emergence of Organizing Structure in Conceptual Representation

1 code implementation • 28 Nov 2016 • Brenden M. Lake, Neil D. Lawrence, Joshua B. Tenenbaum

While this approach can learn intuitive organizations, including a tree for animals and a ring for the color circle, it assumes a strong inductive bias that considers only these particular forms, and each form is explicitly provided as initial knowledge.

Inductive Bias

Building Machines That Learn and Think Like People

no code implementations • 1 Apr 2016 • Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman

Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people.

Board Games, Object Recognition

Softstar: Heuristic-Guided Probabilistic Inference

no code implementations • NeurIPS 2015 • Mathew Monfort, Brenden M. Lake, Brian Ziebart, Patrick Lucey, Josh Tenenbaum

Recent machine learning methods for sequential behavior prediction estimate the motives of behavior rather than the behavior itself.

BIG-bench Machine Learning

One-shot learning by inverting a compositional causal process

no code implementations • NeurIPS 2013 • Brenden M. Lake, Ruslan R. Salakhutdinov, Josh Tenenbaum

People can learn a new visual class from just one example, yet machine learning algorithms typically require hundreds or thousands of examples to tackle the same problems.

General Classification, One-Shot Learning
