Search Results for author: Matthew Botvinick

Found 35 papers, 14 papers with code

Meta-Learned Models of Cognition

1 code implementation 12 Apr 2023 Marcel Binz, Ishita Dasgupta, Akshay Jagadish, Matthew Botvinick, Jane X. Wang, Eric Schulz

Meta-learning is a framework for learning learning algorithms through repeated interactions with an environment as opposed to designing them by hand.

Bayesian Inference, Meta-Learning
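The snippet below is a minimal, hypothetical illustration of this framing: a recurrent learner adapts to each new toy regression task through its hidden state, while gradient descent across many sampled tasks shapes the learning algorithm itself. `RNNLearner` and `sample_task` are invented for the example and are not from the paper.

```python
# Sketch: meta-learning as "learning to learn" across a task distribution.
# Within-task adaptation happens in the RNN's hidden state; only the RNN
# weights are updated across tasks (hypothetical toy regression tasks).
import torch
import torch.nn as nn

class RNNLearner(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, xy_seq):                      # (batch, steps, 2): current x, previous y
        h, _ = self.rnn(xy_seq)
        return self.head(h)                         # predicted y at every step

def sample_task(batch=16, steps=20):
    a = torch.randn(batch, 1, 1)                    # task parameter, redrawn per task
    x = torch.randn(batch, steps, 1)
    return x, a * x                                 # toy task family: y = a * x

model = RNNLearner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(1000):                               # outer loop: shape the learning algorithm
    x, y = sample_task()
    prev_y = torch.cat([torch.zeros_like(y[:, :1]), y[:, :-1]], dim=1)
    pred = model(torch.cat([x, prev_y], dim=-1))    # inner "learning" lives in the hidden state
    loss = ((pred - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```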

HCMD-zero: Learning Value Aligned Mechanisms from Data

no code implementations 21 Feb 2022 Jan Balaguer, Raphael Koster, Ari Weinstein, Lucy Campbell-Gillingham, Christopher Summerfield, Matthew Botvinick, Andrea Tacchetti

Our analysis shows HCMD-zero consistently makes the mechanism policy more and more likely to be preferred by human participants over the course of training, and that it results in a mechanism with an interpretable and intuitive policy.

SIMONe: View-Invariant, Temporally-Abstracted Object Representations via Unsupervised Video Decomposition

1 code implementation NeurIPS 2021 Rishabh Kabra, Daniel Zoran, Goker Erdogan, Loic Matthey, Antonia Creswell, Matthew Botvinick, Alexander Lerchner, Christopher P. Burgess

Leveraging the shared structure that exists across different scenes, our model learns to infer two sets of latent representations from RGB video input alone: a set of "object" latents, corresponding to the time-invariant, object-level contents of the scene, as well as a set of "frame" latents, corresponding to global time-varying elements such as viewpoint.

Instance Segmentation, Object +1
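As a rough illustration of the latent split described in the abstract (a set of time-invariant "object" latents plus time-varying "frame" latents), here is a toy encoder that produces both sets from a small video tensor. It is not the SIMONe architecture; all layers and shapes are placeholders.

```python
# Toy sketch of the two-latent-set factorization described above:
# K time-invariant object latents plus T time-varying frame latents per video.
import torch
import torch.nn as nn

class TwoFactorEncoder(nn.Module):
    def __init__(self, K=4, d=32, T=8):
        super().__init__()
        self.K, self.d = K, d
        self.frame_enc = nn.Linear(3 * 16 * 16, d)    # per-frame features
        self.object_enc = nn.Linear(T * d, K * d)     # pooled across time

    def forward(self, video):                         # video: (B, T, 3, 16, 16)
        B, T = video.shape[:2]
        per_frame = self.frame_enc(video.reshape(B, T, -1))         # (B, T, d)
        frame_latents = per_frame                                    # time-varying
        object_latents = self.object_enc(per_frame.reshape(B, -1))   # time-invariant
        return object_latents.reshape(B, self.K, self.d), frame_latents

enc = TwoFactorEncoder()
obj, frm = enc(torch.randn(2, 8, 3, 16, 16))
print(obj.shape, frm.shape)   # (2, 4, 32) object latents, (2, 8, 32) frame latents
```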

Neural spatio-temporal reasoning with object-centric self-supervised learning

no code implementations 1 Jan 2021 David Ding, Felix Hill, Adam Santoro, Matthew Botvinick

Transformer-based language models have proved capable of rudimentary symbolic reasoning, underlining the effectiveness of applying self-attention computations to sets of discrete entities.

Language Modelling, Self-Supervised Learning
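The self-attention-over-entities operation the abstract alludes to can be sketched with a standard attention layer over a set of entity embeddings; this is only the basic building block, not the paper's full spatio-temporal model.

```python
# Minimal sketch: self-attention applied to a set of entity embeddings.
import torch
import torch.nn as nn

entities = torch.randn(2, 10, 64)            # (batch, num_entities, feature_dim)
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
# Each entity attends to every other entity; the output is a relation-aware update.
updated, weights = attn(entities, entities, entities)
print(updated.shape, weights.shape)          # (2, 10, 64), (2, 10, 10)
```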

The Variational Bandwidth Bottleneck: Stochastic Evaluation on an Information Budget

1 code implementation ICLR 2020 Anirudh Goyal, Yoshua Bengio, Matthew Botvinick, Sergey Levine

This is typically the case when we have a standard conditioning input, such as a state observation, and a "privileged" input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent.

Reinforcement Learning +2
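To make the setup concrete, here is a loose, hypothetical sketch in which a policy always sees the cheap state input but only stochastically reads the privileged goal input, paying a penalty when it does. This illustrates the idea of budgeted access, not the paper's exact variational objective; `gate_net` and all sizes are invented.

```python
# Loose sketch: cheap input (state) is always available; the "privileged" input
# (here, a goal vector) is read stochastically and its use is penalized.
import torch
import torch.nn as nn

state, goal = torch.randn(32, 8), torch.randn(32, 8)
gate_net = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())   # decides, from state alone,
p_access = gate_net(state)                                 # how likely to read the goal
access = torch.bernoulli(p_access)                         # stochastic channel access
policy_in = torch.cat([state, access * goal], dim=-1)      # goal is zeroed when closed
bandwidth_cost = p_access.mean()                           # penalize frequent access
# Note: the hard Bernoulli sample is not differentiable; this is illustration only.
```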

Environmental drivers of systematicity and generalization in a situated agent

no code implementations ICLR 2020 Felix Hill, Andrew Lampinen, Rosalia Schneider, Stephen Clark, Matthew Botvinick, James L. McClelland, Adam Santoro

The question of whether deep neural networks are good at generalising beyond their immediate training experience is of critical importance for learning-based approaches to AI.

Unity

Deep reinforcement learning with relational inductive biases

no code implementations ICLR 2019 Vinicius Zambaldi, David Raposo, Adam Santoro, Victor Bapst, Yujia Li, Igor Babuschkin, Karl Tuyls, David Reichert, Timothy Lillicrap, Edward Lockhart, Murray Shanahan, Victoria Langston, Razvan Pascanu, Matthew Botvinick, Oriol Vinyals, Peter Battaglia

We introduce an approach for augmenting model-free deep reinforcement learning agents with a mechanism for relational reasoning over structured representations, which improves performance, learning efficiency, generalization, and interpretability.

Deep Reinforcement Learning, Reinforcement Learning +4
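A hedged sketch of the kind of relational module the abstract describes: spatial CNN features are treated as a set of entities and updated with self-attention before the policy head. Layer sizes and the action count are illustrative, not taken from the paper.

```python
# Sketch: entities extracted from a CNN feature map, related via self-attention,
# then pooled into action logits for a model-free agent.
import torch
import torch.nn as nn

class RelationalBlock(nn.Module):
    def __init__(self, channels=32, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.policy = nn.Linear(channels, 6)          # e.g. 6 discrete actions

    def forward(self, obs):                           # obs: (batch, 3, H, W)
        fmap = self.conv(obs)                         # (batch, C, H, W)
        B, C, H, W = fmap.shape
        entities = fmap.flatten(2).transpose(1, 2)    # (batch, H*W entities, C)
        related, _ = self.attn(entities, entities, entities)
        pooled = related.max(dim=1).values            # aggregate over entities
        return self.policy(pooled)                    # action logits

logits = RelationalBlock()(torch.randn(2, 3, 8, 8))
print(logits.shape)   # (2, 6)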

Transfer and Exploration via the Information Bottleneck

no code implementations ICLR 2019 Anirudh Goyal, Riashat Islam, DJ Strouse, Zafarali Ahmed, Hugo Larochelle, Matthew Botvinick, Yoshua Bengio, Sergey Levine

In new environments, this model can then identify novel subgoals for further exploration, guiding the agent through a sequence of potential decision states and through new regions of the state space.

Reinforcement Learning
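A rough sketch of the decision-state idea behind this line of work, under the assumption that a goal-conditioned latent is regularized toward a goal-independent prior and that high-KL states mark candidate subgoals; networks and sizes are placeholders, not the paper's.

```python
# Sketch: states whose latent depends strongly on the goal (large KL to a
# goal-independent prior) are treated as candidate "decision states".
import torch
import torch.nn as nn
import torch.distributions as D

enc = nn.Linear(8 + 8, 2 * 4)                     # (state, goal) -> mean, log-std of z
state, goal = torch.randn(32, 8), torch.randn(32, 8)
mean, log_std = enc(torch.cat([state, goal], dim=-1)).chunk(2, dim=-1)
posterior = D.Normal(mean, log_std.exp())
prior = D.Normal(torch.zeros_like(mean), torch.ones_like(mean))   # goal-independent
kl_per_state = D.kl_divergence(posterior, prior).sum(-1)          # (32,)
decision_states = kl_per_state.topk(5).indices    # most goal-dependent states
```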

Multi-Object Representation Learning with Iterative Variational Inference

6 code implementations 1 Mar 2019 Klaus Greff, Raphaël Lopez Kaufman, Rishabh Kabra, Nick Watters, Chris Burgess, Daniel Zoran, Loic Matthey, Matthew Botvinick, Alexander Lerchner

Human perception is structured around objects which form the basis for our higher-level cognition and impressive systematic generalization abilities.

Object, Representation Learning +3

InfoBot: Transfer and Exploration via the Information Bottleneck

no code implementations 30 Jan 2019 Anirudh Goyal, Riashat Islam, Daniel Strouse, Zafarali Ahmed, Matthew Botvinick, Hugo Larochelle, Yoshua Bengio, Sergey Levine

In new environments, this model can then identify novel subgoals for further exploration, guiding the agent through a sequence of potential decision states and through new regions of the state space.

Reinforcement Learning

Bayesian Action Decoder for Deep Multi-Agent Reinforcement Learning

1 code implementation 4 Nov 2018 Jakob N. Foerster, Francis Song, Edward Hughes, Neil Burch, Iain Dunning, Shimon Whiteson, Matthew Botvinick, Michael Bowling

We present the Bayesian action decoder (BAD), a new multi-agent learning method that uses an approximate Bayesian update to obtain a public belief that conditions on the actions taken by all agents in the environment.

Decoder, Multi-agent Reinforcement Learning +4
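A toy illustration of the public-belief update the abstract describes: all agents share a posterior over one agent's private state and update it by Bayes rule after observing that agent's action under a commonly known policy. The likelihood table here is hypothetical.

```python
# Toy public-belief Bayesian update after observing a publicly visible action.
import numpy as np

belief = np.array([0.25, 0.25, 0.25, 0.25])        # public prior over 4 private states
# P(action | private state) under the agents' commonly known policy (made up)
likelihood = np.array([
    [0.7, 0.1, 0.1, 0.1],    # action 0
    [0.3, 0.9, 0.9, 0.9],    # action 1
])
observed_action = 0
posterior = belief * likelihood[observed_action]    # Bayes rule on the public action
posterior /= posterior.sum()
print(posterior)   # belief shifts toward private states that make action 0 likely
```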

Relational Forward Models for Multi-Agent Learning

no code implementations ICLR 2019 Andrea Tacchetti, H. Francis Song, Pedro A. M. Mediano, Vinicius Zambaldi, Neil C. Rabinowitz, Thore Graepel, Matthew Botvinick, Peter W. Battaglia

The behavioral dynamics of multi-agent systems have a rich and orderly structure, which can be leveraged to understand these systems, and to improve how artificial agents learn to operate in them.

Relational Deep Reinforcement Learning

7 code implementations 5 Jun 2018 Vinicius Zambaldi, David Raposo, Adam Santoro, Victor Bapst, Yujia Li, Igor Babuschkin, Karl Tuyls, David Reichert, Timothy Lillicrap, Edward Lockhart, Murray Shanahan, Victoria Langston, Razvan Pascanu, Matthew Botvinick, Oriol Vinyals, Peter Battaglia

We introduce an approach for deep reinforcement learning (RL) that improves upon the efficiency, generalization capacity, and interpretability of conventional approaches through structured perception and relational reasoning.

Deep Reinforcement Learning, Reinforcement Learning +4

Been There, Done That: Meta-Learning with Episodic Recall

1 code implementation ICML 2018 Samuel Ritter, Jane X. Wang, Zeb Kurth-Nelson, Siddhant M. Jayakumar, Charles Blundell, Razvan Pascanu, Matthew Botvinick

Meta-learning agents excel at rapidly learning new tasks from open-ended task distributions; yet, they forget what they learn about each task as soon as the next begins.

Meta-Learning

On the importance of single directions for generalization

1 code implementation ICLR 2018 Ari S. Morcos, David G. T. Barrett, Neil C. Rabinowitz, Matthew Botvinick

Finally, we find that class selectivity is a poor predictor of task importance, suggesting not only that networks which generalize well minimize their dependence on individual units by reducing their selectivity, but also that individually selective units may not be necessary for strong network performance.
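For reference, a sketch of a per-unit class-selectivity index in the spirit of the paper, computed from class-conditional mean activations on hypothetical data; the paper's exact definition may differ in detail.

```python
# Sketch: selectivity ~ (mu_max - mu_rest) / (mu_max + mu_rest) per unit,
# where mu_max is the mean activation for the unit's preferred class.
import numpy as np

rng = np.random.default_rng(0)
activations = rng.random((1000, 64))            # (samples, units), hypothetical data
labels = rng.integers(0, 10, size=1000)         # 10 classes

class_means = np.stack([activations[labels == c].mean(0) for c in range(10)])  # (10, 64)
mu_max = class_means.max(0)
mu_rest = (class_means.sum(0) - mu_max) / (class_means.shape[0] - 1)
selectivity = (mu_max - mu_rest) / (mu_max + mu_rest + 1e-8)
print(selectivity.shape)                        # one index per unit: (64,)
```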

Machine Theory of Mind

no code implementations ICML 2018 Neil C. Rabinowitz, Frank Perbet, H. Francis Song, Chiyuan Zhang, S. M. Ali Eslami, Matthew Botvinick

We design a Theory of Mind neural network -- a ToMnet -- which uses meta-learning to build models of the agents it encounters, from observations of their behaviour alone.

Deep Reinforcement Learning, Meta-Learning

SCAN: Learning Hierarchical Compositional Visual Concepts

no code implementations ICLR 2018 Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P. Burgess, Matko Bosnjak, Murray Shanahan, Matthew Botvinick, Demis Hassabis, Alexander Lerchner

SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner.

beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

6 code implementations ICLR 2017 Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, Alexander Lerchner

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do.

Disentanglement
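The core of beta-VAE is a standard VAE objective whose KL term is scaled by a factor beta > 1 to encourage disentangled latents; a minimal sketch with a toy encoder and decoder follows (the architecture is a placeholder, not the paper's).

```python
# Minimal sketch of the beta-VAE objective: reconstruction loss plus beta * KL.
import torch
import torch.nn as nn

def beta_vae_loss(x, recon, mean, log_var, beta=4.0):
    recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mean.pow(2) - log_var.exp())
    return recon_loss + beta * kl          # beta = 1 recovers the standard VAE

enc = nn.Linear(784, 2 * 10)               # toy encoder: mean and log-variance of z
dec = nn.Linear(10, 784)                   # toy decoder
x = torch.rand(32, 784)
mean, log_var = enc(x).chunk(2, dim=-1)
z = mean + torch.randn_like(mean) * (0.5 * log_var).exp()   # reparameterization trick
loss = beta_vae_loss(x, torch.sigmoid(dec(z)), mean, log_var)
```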

One-shot Learning with Memory-Augmented Neural Networks

11 code implementations 19 May 2016 Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, Timothy Lillicrap

Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of "one-shot learning."

One-Shot Learning

Goal-directed decision making in prefrontal cortex: a computational framework

no code implementations NeurIPS 2008 Matthew Botvinick, James An

Research in animal learning and behavioral neuroscience has distinguished between two forms of action control: a habit-based form, which relies on stored action values, and a goal-directed form, which forecasts and compares action outcomes based on a model of the environment.

Bayesian Inference, Decision Making
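A toy contrast between the two forms of control described above, assuming a hypothetical two-action world whose rewards have changed since the cached values were learned: the habit system keeps choosing by stored value, while the goal-directed system forecasts outcomes with its internal model.

```python
# Habit-based control reads cached action values; goal-directed control
# evaluates actions by forecasting outcomes with a model of the environment.
cached_Q = {("start", "left"): 0.4, ("start", "right"): 0.6}      # habit system
model = {("start", "left"): ("berries", 1.0),                     # (outcome, reward)
         ("start", "right"): ("empty", 0.0)}                      # world has changed

def habit_choice(state):
    return max(["left", "right"], key=lambda a: cached_Q[(state, a)])

def goal_directed_choice(state):
    # forecast and compare action outcomes using the internal model
    return max(["left", "right"], key=lambda a: model[(state, a)][1])

print(habit_choice("start"), goal_directed_choice("start"))  # 'right' vs 'left'
```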
