Search Results for author: Timothy Lillicrap

Found 52 papers, 30 papers with code

Equilibrium Aggregation: Encoding Sets via Optimization

no code implementations · 25 Feb 2022 · Sergey Bartunov, Fabian B. Fuchs, Timothy Lillicrap

Processing sets or other unordered, potentially variable-sized inputs in neural networks is usually handled by aggregating a number of input tensors into a single representation.

Molecular Property Prediction
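
The abstract's contrast between pooling and optimization can be made concrete: with a quadratic per-element energy, aggregation-by-optimization reduces exactly to mean pooling. A numpy sketch under that toy assumption (not the paper's learned energy):

```python
import numpy as np

def mean_pool(xs):
    """Conventional permutation-invariant aggregation: the mean."""
    return xs.mean(axis=0)

def equilibrium_aggregate(xs, steps=200, lr=0.1):
    """Aggregate a set by solving y* = argmin_y sum_i E(x_i, y).
    With the toy energy E(x, y) = ||x - y||^2 the minimizer is the mean,
    showing how optimization-based aggregation subsumes pooling."""
    y = np.zeros(xs.shape[1])
    for _ in range(steps):
        grad = 2.0 * (y - xs).sum(axis=0)   # d/dy sum_i ||x_i - y||^2
        y -= lr * grad / len(xs)            # averaged gradient step
    return y
```

With a learned, non-quadratic energy the same inner optimization yields aggregations that pooling cannot express.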

The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning

no code implementations NeurIPS 2021 Shahab Bakhtiari, Patrick Mineault, Timothy Lillicrap, Christopher Pack, Blake Richards

We show that when we train a deep neural network architecture with two parallel pathways using a self-supervised predictive loss function, we can outperform other models in fitting mouse visual cortex.

Symbolic Behaviour in Artificial Intelligence

no code implementations · 5 Feb 2021 · Adam Santoro, Andrew Lampinen, Kory Mathewson, Timothy Lillicrap, David Raposo

This approach will allow AI to interpret something as symbolic on its own rather than simply manipulate things that are symbols only to human onlookers, and thus will ultimately lead to AI with more human-like symbolic fluency.

Mastering Atari with Discrete World Models

6 code implementations ICLR 2021 Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, Jimmy Ba

The world model uses discrete representations and is trained separately from the policy.

Ranked #3 on Atari 2600 Skiing on the Atari Games benchmark (using extra training data)

Atari Games
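
A minimal sketch of the discrete-representation ingredient: sampling a one-hot categorical code while keeping a gradient path through the softmax probabilities (the straight-through trick commonly used for such latents). In plain numpy the gradient path is only shown by the algebraic form:

```python
import numpy as np

rng = np.random.default_rng(0)

def straight_through_sample(logits):
    """Draw a one-hot categorical code from the logits. The return value is
    written as probs + (one_hot - probs): its forward value is the discrete
    one-hot, while in an autodiff framework the gradient would flow through
    the probs term -- the straight-through trick behind discrete latents."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    idx = rng.choice(len(logits), p=probs)
    one_hot = np.zeros_like(probs)
    one_hot[idx] = 1.0
    return probs + (one_hot - probs)
```

Stacking many such small categorical codes gives a discrete latent state that a world model can predict and a policy can be trained against separately.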

Beyond Tabula-Rasa: a Modular Reinforcement Learning Approach for Physically Embedded 3D Sokoban

no code implementations · 3 Oct 2020 · Peter Karkus, Mehdi Mirza, Arthur Guez, Andrew Jaegle, Timothy Lillicrap, Lars Buesing, Nicolas Heess, Theophane Weber

We explore whether integrated tasks like Mujoban can be solved by composing RL modules together in a sense-plan-act hierarchy, where modules have well-defined roles, as in classic robot architectures.

reinforcement-learning

dm_control: Software and Tasks for Continuous Control

1 code implementation · 22 Jun 2020 · Yuval Tassa, Saran Tunyasuvunakool, Alistair Muldal, Yotam Doron, Piotr Trochim, Si-Qi Liu, Steven Bohez, Josh Merel, Tom Erez, Timothy Lillicrap, Nicolas Heess

The dm_control software package is a collection of Python libraries and task suites for reinforcement learning agents in an articulated-body simulation.

Continuous Control · reinforcement-learning
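
The suite's environments follow a reset/step timestep interface. Below is a self-contained mock in that style — a toy point-mass task and episode loop, not the real dm_control API (which returns richer TimeStep objects with step types and per-task observation dicts):

```python
from collections import namedtuple
import numpy as np

# Simplified stand-in for a dm_control-style timestep (an assumption).
TimeStep = namedtuple("TimeStep", ["reward", "discount", "observation", "last"])

class PointMassEnv:
    """Toy continuous-control task: drive a point mass toward the origin."""
    def __init__(self, horizon=100):
        self.horizon = horizon

    def reset(self):
        self.pos = np.array([1.0, -1.0])
        self.t = 0
        return TimeStep(None, 1.0, self.pos.copy(), False)

    def step(self, action):
        self.t += 1
        self.pos += 0.1 * np.clip(action, -1.0, 1.0)   # bounded actuation
        reward = -np.linalg.norm(self.pos)             # distance-to-goal cost
        return TimeStep(reward, 1.0, self.pos.copy(), self.t >= self.horizon)

def run_episode(env, policy):
    """Standard agent-environment loop over one episode."""
    ts = env.reset()
    total = 0.0
    while not ts.last:
        ts = env.step(policy(ts.observation))
        total += ts.reward
    return total
```

A policy that steers toward the origin should collect higher return than one that does nothing, which is the kind of interpretable reward structure the suite standardizes.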

Automated curriculum generation through setter-solver interactions

no code implementations ICLR 2020 Sebastien Racaniere, Andrew Lampinen, Adam Santoro, David Reichert, Vlad Firoiu, Timothy Lillicrap

We demonstrate the success of our approach in rich but sparsely rewarding 2D and 3D environments, where an agent is tasked to achieve a single goal selected from a set of possible goals that varies between episodes, and identify challenges for future work.

Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

15 code implementations · 19 Nov 2019 · Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver

When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.

Atari Games · Game of Chess +2
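
The key property — planning with a learned model, never the real rules — can be miniaturized: a depth-limited search that only ever calls learned dynamics, reward, and value functions. Everything below is a hand-written toy stand-in, not MuZero's networks or its MCTS:

```python
import numpy as np

GAMMA = 0.99

def dynamics(state, action):
    """Stand-in for the learned latent transition g(s, a) -> s'."""
    return state + np.array([1.0, -1.0]) * action

def reward_fn(state, action):
    """Stand-in for the learned reward head r(s, a)."""
    return -abs(state[0] - 3.0) + 0.1 * action

def value_fn(state):
    """Stand-in for the learned value head v(s)."""
    return -abs(state[0] - 3.0)

def plan(state, actions=(0, 1), depth=3):
    """Exhaustive depth-limited search entirely inside the learned model:
    the planner never consults the environment's true rules."""
    def q(s, a, d):
        s2 = dynamics(s, a)
        if d == 1:
            return reward_fn(s, a) + GAMMA * value_fn(s2)
        return reward_fn(s, a) + GAMMA * max(q(s2, b, d - 1) for b in actions)
    return max(actions, key=lambda a: q(state, a, depth))
```

In the toy dynamics, action 1 moves the first state coordinate toward the rewarding region around 3.0, so the planner prefers it from the origin.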

Deep Compressed Sensing

1 code implementation · 16 May 2019 · Yan Wu, Mihaela Rosca, Timothy Lillicrap

CS is flexible and data efficient, but its application has been restricted by the strong assumption of sparsity and costly reconstruction process.

Meta-Learning
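
The central move — replacing the sparsity prior with search in a generator's latent space — can be sketched with a linear toy "generator": reconstruct a signal from few measurements by gradient descent on the latent code. The matrices G, F and the step size below are illustrative assumptions:

```python
import numpy as np

def reconstruct(measurement, F, G, steps=500, lr=0.05):
    """Recover a signal x = G z from measurements m = F x by optimizing the
    latent code z to minimize 0.5 * ||F G z - m||^2. No sparsity assumption:
    the (here linear, toy) generator G supplies the prior over signals."""
    z = np.zeros(G.shape[1])
    for _ in range(steps):
        residual = F @ (G @ z) - measurement
        z -= lr * G.T @ (F.T @ residual)   # gradient with respect to z
    return G @ z
```

Because the generator confines candidates to a low-dimensional manifold, far fewer measurements than signal dimensions can suffice, which is the data efficiency the abstract refers to.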

Reliable Uncertainty Estimates in Deep Neural Networks using Noise Contrastive Priors

no code implementations ICLR 2019 Danijar Hafner, Dustin Tran, Timothy Lillicrap, Alex Irpan, James Davidson

NCPs are compatible with any model that can output uncertainty estimates, are easy to scale, and yield reliable uncertainty estimates throughout training.

Active Learning

Deep reinforcement learning with relational inductive biases

no code implementations ICLR 2019 Vinicius Zambaldi, David Raposo, Adam Santoro, Victor Bapst, Yujia Li, Igor Babuschkin, Karl Tuyls, David Reichert, Timothy Lillicrap, Edward Lockhart, Murray Shanahan, Victoria Langston, Razvan Pascanu, Matthew Botvinick, Oriol Vinyals, Peter Battaglia

We introduce an approach for augmenting model-free deep reinforcement learning agents with a mechanism for relational reasoning over structured representations, which improves performance, learning efficiency, generalization, and interpretability.

reinforcement-learning · Relational Reasoning +2
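
The relational mechanism added to the agent is, at its core, dot-product self-attention over a set of entity vectors: each entity updates its representation from pairwise interactions with every other entity. A single-head numpy sketch (weight shapes are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relational_attention(entities, wq, wk, wv):
    """One head of dot-product self-attention over entity vectors."""
    q, k, v = entities @ wq, entities @ wk, entities @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # pairwise relation strengths
    return softmax(scores, axis=-1) @ v      # relation-weighted mixing
```

The operation is permutation-equivariant: reordering the entities reorders the outputs the same way, which is what makes it a structured, relational computation rather than a flat one.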

Deep Learning without Weight Transport

3 code implementations NeurIPS 2019 Mohamed Akrout, Collin Wilson, Peter C. Humphreys, Timothy Lillicrap, Douglas Tweed

Current algorithms for deep learning probably cannot run in the brain because they rely on weight transport, where forward-path neurons transmit their synaptic weights to a feedback path, in a way that is likely impossible biologically.
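
The weight-transport problem and one fix can be made concrete in a few lines: train a two-layer network whose backward pass uses a fixed random matrix B instead of the transpose of the forward weights, in the style of feedback alignment. The toy task and layer sizes below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_feedback_alignment(steps=1000, lr=0.01):
    """Two-layer regression network trained WITHOUT weight transport:
    errors are fed back through a fixed random matrix B rather than w2.T,
    so no neuron needs to know another's synaptic weights."""
    x = rng.normal(size=(64, 3))
    y = x @ rng.normal(size=(3, 2))          # toy linear targets to fit
    w1 = 0.1 * rng.normal(size=(3, 8))
    w2 = 0.1 * rng.normal(size=(8, 2))
    B = rng.normal(size=(2, 8))              # fixed feedback path, never learned
    losses = []
    for _ in range(steps):
        h = np.tanh(x @ w1)
        err = h @ w2 - y
        losses.append((err ** 2).mean())
        w2 -= lr * h.T @ err / len(x)        # exact gradient for the top layer
        delta_h = (err @ B) * (1 - h ** 2)   # B stands in for w2.T here
        w1 -= lr * x.T @ delta_h / len(x)
    return losses
```

Despite the "wrong" feedback weights, the forward weights tend to align with B over training, so the loss still falls — the phenomenon these biologically plausible algorithms build on.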

Learning to Make Analogies by Contrasting Abstract Relational Structure

2 code implementations ICLR 2019 Felix Hill, Adam Santoro, David G. T. Barrett, Ari S. Morcos, Timothy Lillicrap

Here, we study how analogical reasoning can be induced in neural networks that learn to perceive and reason about raw visual data.

Learning Attractor Dynamics for Generative Memory

1 code implementation NeurIPS 2018 Yan Wu, Greg Wayne, Karol Gregor, Timothy Lillicrap

Based on the idea of memory writing as inference, as proposed in the Kanerva Machine, we show that a likelihood-based Lyapunov function emerges from maximising the variational lower-bound of a generative memory.
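
"Attractor dynamics" can be illustrated with the classical Hopfield-style associative memory — a far simpler, non-generative relative of the model in the paper, used here purely for illustration: corrupted queries fall back into stored patterns under iterated updates, with an energy function playing the role of the Lyapunov function:

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product weights for a set of ±1 patterns."""
    W = sum(np.outer(p, p) for p in patterns) / len(patterns[0])
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=10):
    """Iterate the attractor update x <- sign(W x). Stored patterns are
    fixed points, and an energy E(x) = -0.5 x^T W x governs the dynamics."""
    for _ in range(steps):
        x = np.sign(W @ x)
        x[x == 0] = 1.0        # break ties deterministically
    return x
```

Writing memories as inference, as in the Kanerva Machine, replaces this fixed Hebbian rule with a learned generative model, but the retrieval-by-descent picture is the same.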

Episodic Curiosity through Reachability

1 code implementation ICLR 2019 Nikolay Savinov, Anton Raichuk, Raphaël Marinier, Damien Vincent, Marc Pollefeys, Timothy Lillicrap, Sylvain Gelly

One solution to this problem is to allow the agent to create rewards for itself - thus making rewards dense and more suitable for learning.
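
The self-generated reward idea can be sketched as an episodic novelty bonus: observations judged far from everything in an episodic memory earn a reward and are added to it. Plain embedding distance below stands in for the paper's learned reachability network:

```python
import numpy as np

class EpisodicCuriosity:
    """Sketch of an episodic curiosity bonus: reward observations whose
    embeddings are not 'reachable' from anything stored in memory, then
    store them. Euclidean distance is an assumption standing in for a
    learned reachability estimate."""
    def __init__(self, threshold=1.0, bonus=0.5):
        self.memory = []
        self.threshold = threshold
        self.bonus = bonus

    def intrinsic_reward(self, embedding):
        if not self.memory:
            self.memory.append(embedding)
            return self.bonus
        dist = min(np.linalg.norm(embedding - m) for m in self.memory)
        if dist > self.threshold:      # far from memory: treat as novel
            self.memory.append(embedding)
            return self.bonus
        return 0.0
```

Adding this bonus to the (sparse) extrinsic reward densifies the learning signal without changing which states are ultimately desirable.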

Noise Contrastive Priors for Functional Uncertainty

2 code implementations ICLR 2019 Danijar Hafner, Dustin Tran, Timothy Lillicrap, Alex Irpan, James Davidson

NCPs are compatible with any model that can output uncertainty estimates, are easy to scale, and yield reliable uncertainty estimates throughout training.

Active Learning

Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures

1 code implementation NeurIPS 2018 Sergey Bartunov, Adam Santoro, Blake A. Richards, Luke Marris, Geoffrey E. Hinton, Timothy Lillicrap

Here we present results on scaling up biologically motivated models of deep learning on datasets which need deep networks with appropriate architectures to achieve good performance.

Measuring abstract reasoning in neural networks

2 code implementations ICML 2018 David G. T. Barrett, Felix Hill, Adam Santoro, Ari S. Morcos, Timothy Lillicrap

To succeed at this challenge, models must cope with various generalisation 'regimes' in which the training and test data differ in clearly-defined ways.

Relational Deep Reinforcement Learning

7 code implementations · 5 Jun 2018 · Vinicius Zambaldi, David Raposo, Adam Santoro, Victor Bapst, Yujia Li, Igor Babuschkin, Karl Tuyls, David Reichert, Timothy Lillicrap, Edward Lockhart, Murray Shanahan, Victoria Langston, Razvan Pascanu, Matthew Botvinick, Oriol Vinyals, Peter Battaglia

We introduce an approach for deep reinforcement learning (RL) that improves upon the efficiency, generalization capacity, and interpretability of conventional approaches through structured perception and relational reasoning.

reinforcement-learning · Relational Reasoning +2

The Kanerva Machine: A Generative Distributed Memory

no code implementations ICLR 2018 Yan Wu, Greg Wayne, Alex Graves, Timothy Lillicrap

We present an end-to-end trained memory system that quickly adapts to new data and generates samples like them.

DeepMind Control Suite

4 code implementations · 2 Jan 2018 · Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy Lillicrap, Martin Riedmiller

The DeepMind Control Suite is a set of continuous control tasks with a standardised structure and interpretable rewards, intended to serve as performance benchmarks for reinforcement learning agents.

Continuous Control · reinforcement-learning

Generative Temporal Models with Memory

no code implementations · 15 Feb 2017 · Mevlana Gemici, Chia-Chun Hung, Adam Santoro, Greg Wayne, Shakir Mohamed, Danilo J. Rezende, David Amos, Timothy Lillicrap

We consider the general problem of modeling temporal data with long-range dependencies, wherein new observations are fully or partially predictable based on temporally-distant, past observations.

Variational Inference

Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic

2 code implementations · 7 Nov 2016 · Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine

We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation.

Continuous Control · Policy Gradient Methods +1
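
The control variate machinery behind Q-Prop is the standard Monte Carlo trick: subtract a correlated quantity of known expectation to shrink variance without changing the mean. A generic sketch (the integrand and baseline are toy choices, not Q-Prop's critic):

```python
import numpy as np

def mc_estimates(n=10000, seed=0):
    """Estimate E[exp(U)], U ~ Uniform(0, 1), plainly and with the control
    variate U (known mean 0.5). The correlated baseline absorbs variance
    while leaving the expectation unchanged."""
    u = np.random.default_rng(seed).uniform(size=n)
    f = np.exp(u)
    beta = np.cov(f, u)[0, 1] / np.var(u)   # near-optimal coefficient
    controlled = f - beta * (u - 0.5)
    return f, controlled
```

In Q-Prop the role of U is played by a Taylor expansion of the off-policy critic, and the conservative/aggressive variants correspond to different choices of the coefficient.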

Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-Policy Updates

no code implementations3 Oct 2016 Shixiang Gu, Ethan Holly, Timothy Lillicrap, Sergey Levine

In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots.

reinforcement-learning

One-shot Learning with Memory-Augmented Neural Networks

11 code implementations · 19 May 2016 · Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, Timothy Lillicrap

Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of "one-shot learning."

One-Shot Learning

Continuous Deep Q-Learning with Model-based Acceleration

8 code implementations · 2 Mar 2016 · Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, Sergey Levine

In this paper, we explore algorithms and representations to reduce the sample complexity of deep reinforcement learning for continuous control tasks.

Continuous Control · Q-Learning +1
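
One representation this paper introduces, the normalized advantage function (NAF), restricts Q to be quadratic in the action, so the greedy continuous action is available in closed form rather than via an inner optimization. A sketch with fixed stand-in values for what the networks would output:

```python
import numpy as np

def naf_q(v, mu, P, action):
    """Q(s, a) = V(s) + A(s, a), with A(s, a) = -0.5 (a - mu)^T P (a - mu).
    For positive-definite P the advantage peaks at a = mu, so the greedy
    action is simply mu -- no search over the continuous action space."""
    d = action - mu
    return v - 0.5 * d @ P @ d
```

Here v, mu, and P stand in for the state-dependent outputs of the learned value, policy, and precision heads.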

Deep Reinforcement Learning in Large Discrete Action Spaces

2 code implementations · 24 Dec 2015 · Gabriel Dulac-Arnold, Richard Evans, Hado van Hasselt, Peter Sunehag, Timothy Lillicrap, Jonathan Hunt, Timothy Mann, Theophane Weber, Thomas Degris, Ben Coppin

Being able to reason in an environment with a large number of discrete actions is essential to bringing reinforcement learning to a larger class of problems.

Recommendation Systems · reinforcement-learning
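
The paper's Wolpertinger-style approach makes huge action sets tractable by embedding them: a continuous proto-action from the actor is mapped to its k nearest discrete actions, and a critic picks the best of those few. A sketch with a lookup array standing in for the learned critic (an assumption):

```python
import numpy as np

def select_action(proto_action, action_embeddings, q_values, k=5):
    """Refine a continuous proto-action into a discrete action: take the
    k nearest action embeddings, then choose the neighbour with the
    highest Q-value, so the critic is only evaluated k times instead of
    once per action in the (possibly enormous) action set."""
    dists = np.linalg.norm(action_embeddings - proto_action, axis=1)
    nearest = np.argsort(dists)[:k]          # k nearest discrete actions
    return nearest[np.argmax(q_values[nearest])]
```

The cost of action selection thus scales with k and the nearest-neighbour lookup, not with the total number of discrete actions, which is what makes millions of actions (e.g. recommendation catalogues) feasible.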

Learning Continuous Control Policies by Stochastic Value Gradients

1 code implementation NeurIPS 2015 Nicolas Heess, Greg Wayne, David Silver, Timothy Lillicrap, Yuval Tassa, Tom Erez

One of these variants, SVG(1), shows the effectiveness of learning models, value functions, and policies simultaneously in continuous domains.

Continuous Control
