Search Results for author: Timothy P. Lillicrap

Found 17 papers, 8 papers with code

Towards Biologically Plausible Convolutional Networks

1 code implementation NeurIPS 2021 Roman Pogodin, Yash Mehta, Timothy P. Lillicrap, Peter E. Latham

This requires the network to pause occasionally for a sleep-like phase of "weight sharing".
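The "sleep-like weight sharing" idea can be illustrated with a toy sketch (my own hypothetical code, not the paper's implementation): a locally connected layer keeps an independent filter per spatial location during "wake" learning, and the sleep phase re-synchronises them by averaging, approximating a convolution's shared filter.

```python
import numpy as np

def sleep_phase_weight_sharing(local_filters):
    """Average the location-specific filters into one shared filter,
    then copy it back to every location (hypothetical sketch)."""
    shared = local_filters.mean(axis=0)
    return np.broadcast_to(shared, local_filters.shape).copy()

# During "wake", each spatial location's filter drifts independently;
# an occasional "sleep" phase pulls them back to a common value.
filters = np.random.randn(16, 3, 3)  # 16 locations, one 3x3 filter each
synced = sleep_phase_weight_sharing(filters)
```

After the sleep phase every location holds the same filter, which is what makes the network behave approximately like a convolutional one.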

Compressive Transformers for Long-Range Sequence Modelling

6 code implementations ICLR 2020 Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Timothy P. Lillicrap

We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning.

Language Modelling
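One of the compression functions explored for the Compressive Transformer is simple mean pooling of evicted memories. The sketch below (hypothetical names, not the paper's code) shows how a block of old memory vectors can be condensed at a fixed compression rate before being moved into compressed memory:

```python
import numpy as np

def mean_pool_compress(old_memories, rate=3):
    """Compress a block of evicted memories by mean-pooling every
    `rate` consecutive vectors into one (sketch of one compression fn)."""
    m, d = old_memories.shape
    assert m % rate == 0, "block size must be divisible by the rate"
    return old_memories.reshape(m // rate, rate, d).mean(axis=1)

evicted = np.arange(12, dtype=float).reshape(6, 2)  # 6 memories, dim 2
compressed = mean_pool_compress(evicted, rate=3)
# 6 evicted memories -> 2 compressed memories
```

The model thereby retains a coarse summary of the distant past at a fraction of the memory cost, which is what enables the longer effective context.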

Meta-Learning Deep Energy-Based Memory Models

no code implementations ICLR 2020 Sergey Bartunov, Jack W. Rae, Simon Osindero, Timothy P. Lillicrap

We study the problem of learning associative memory -- a system which is able to retrieve a remembered pattern based on its distorted or incomplete version.

Meta-Learning · Retrieval
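The associative-memory setting can be made concrete with a classical Hopfield network, the textbook energy-based memory that the paper's learned models generalise (this is a standard illustration, not the paper's method): retrieval is descent on an energy function, implemented here as sign updates.

```python
import numpy as np

def store(patterns):
    """Hebbian weights for a classical Hopfield network."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def retrieve(W, x, steps=10):
    """Descend the energy E(x) = -x @ W @ x / 2 via sign updates."""
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

p = np.array([1, -1, 1, -1, 1, -1])   # pattern to remember
W = store([p])
noisy = p.copy()
noisy[0] *= -1                         # distort one bit
recovered = retrieve(W, noisy)
```

The distorted probe falls into the energy minimum at the stored pattern, which is exactly the retrieve-from-corruption behaviour described in the abstract.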

Automated curricula through setter-solver interactions

no code implementations 27 Sep 2019 Sebastien Racaniere, Andrew K. Lampinen, Adam Santoro, David P. Reichert, Vlad Firoiu, Timothy P. Lillicrap

We demonstrate the success of our approach in rich but sparsely rewarding 2D and 3D environments, where an agent must achieve a single goal selected from a set of possible goals that varies between episodes, and we identify challenges for future work.

What does it mean to understand a neural network?

no code implementations 15 Jul 2019 Timothy P. Lillicrap, Konrad P. Kording

In analogy, we conjecture that rules for development and learning in brains may be far easier to understand than their resulting properties.

Meta-Learning Neural Bloom Filters

no code implementations ICLR 2019 Jack W. Rae, Sergey Bartunov, Timothy P. Lillicrap

There has been a recent trend of training neural networks to replace hand-crafted data structures, aiming for faster execution, better accuracy, or greater compression.

Meta-Learning
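For context, the hand-crafted baseline here is the classical Bloom filter: a compact probabilistic set with no false negatives but possible false positives. A minimal sketch (my own illustration, not the paper's neural variant):

```python
import hashlib

class BloomFilter:
    """Classical Bloom filter: m bits, k hash functions."""
    def __init__(self, m=1024, k=3):
        self.bits = bytearray(m)
        self.m, self.k = m, k

    def _indices(self, item):
        # Derive k indices from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def __contains__(self, item):
        # All k bits set => "probably present"; any unset bit => "absent".
        return all(self.bits[idx] for idx in self._indices(item))

bf = BloomFilter()
bf.add("neurips")
```

A learned replacement tries to beat this structure's space/error trade-off by exploiting distributional structure in the data, which the fixed hash functions cannot.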

Composing Entropic Policies using Divergence Correction

no code implementations 5 Dec 2018 Jonathan J. Hunt, Andre Barreto, Timothy P. Lillicrap, Nicolas Heess

Composing previously mastered skills to solve novel tasks promises dramatic improvements in the data efficiency of reinforcement learning.

Continuous Control · Reinforcement Learning (RL)

Experience Replay for Continual Learning

no code implementations ICLR 2019 David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy P. Lillicrap, Greg Wayne

We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence.

Continual Learning
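The core mechanism, experience replay, amounts to storing past transitions and training on uniform samples from them so that earlier tasks keep contributing gradients. A minimal buffer sketch (hypothetical names, a generic illustration rather than the paper's exact setup):

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of transitions; sampling uniformly mixes
    old and new experience, mitigating catastrophic forgetting."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), batch_size)

buf = ReplayBuffer(capacity=1000)
for t in range(5):
    buf.add(t, 0, 1.0, t + 1, False)
batch = buf.sample(3)
```

In the sequential-task setting the buffer retains transitions from earlier tasks, so gradient updates on sampled batches continue to rehearse them.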

Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes

no code implementations NeurIPS 2016 Jack W. Rae, Jonathan J. Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, Timothy P. Lillicrap

SAM learns with comparable data efficiency to existing models on a range of synthetic tasks and one-shot Omniglot character recognition, and can scale to tasks requiring 100,000s of time steps and memories.

Ranked #6 on Question Answering on bAbi (Mean Error Rate metric)

Language Modelling · Machine Translation +2
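The key to scaling is that reads touch only a few memory slots rather than all of them. A sketch of sparse top-k attention over an external memory (my own simplified illustration of the idea, not SAM's actual read operation):

```python
import numpy as np

def sparse_read(memory, query, k=4):
    """Attend to only the k most similar memory slots, so the read
    cost is O(k) in the softmax rather than O(num_slots)."""
    scores = memory @ query                   # similarity to every slot
    top = np.argpartition(scores, -k)[-k:]    # indices of the k best slots
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                  # softmax over k slots only
    return weights @ memory[top], top

memory = np.random.randn(100_000, 32)         # large external memory
query = np.random.randn(32)
read_vec, used_slots = sparse_read(memory, query, k=4)
```

Because the attention weights are exactly zero outside the top-k set, gradients also flow to only those slots, which is what keeps training cost sublinear in memory size.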

Towards deep learning with segregated dendrites

1 code implementation 1 Oct 2016 Jordan Guergiuev, Timothy P. Lillicrap, Blake A. Richards

Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology.

Asynchronous Methods for Deep Reinforcement Learning

70 code implementations 4 Feb 2016 Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu

We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers.

Atari Games · Reinforcement Learning +1
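The "asynchronous gradient descent" ingredient can be sketched in a few lines: several actor-learner threads apply lock-free updates to shared parameters. This toy version (hypothetical names; random vectors stand in for actual policy gradients) only illustrates the update pattern, not the A3C algorithm itself:

```python
import threading
import numpy as np

# Shared parameters, updated lock-free by several actor-learner threads.
shared_theta = np.zeros(4)

def actor_learner(seed, steps=100, lr=0.01):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        grad = rng.normal(size=4)       # stand-in for a policy gradient
        shared_theta[:] -= lr * grad    # asynchronous, in-place update

threads = [threading.Thread(target=actor_learner, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Running many such learners in parallel decorrelates their experience, which in A3C replaces the replay buffer used by earlier deep RL methods.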

Memory-based control with recurrent neural networks

3 code implementations 14 Dec 2015 Nicolas Heess, Jonathan J. Hunt, Timothy P. Lillicrap, David Silver

Partially observed control problems are a challenging aspect of reinforcement learning.

Continuous Control

Random feedback weights support learning in deep neural networks

1 code implementation 2 Nov 2014 Timothy P. Lillicrap, Daniel Cownden, Douglas B. Tweed, Colin J. Akerman

In machine learning, the backpropagation algorithm assigns blame to a neuron by computing exactly how it contributed to an error.
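The paper's alternative, often called feedback alignment, carries the error backwards through a fixed random matrix B instead of the transposed forward weights. A minimal two-layer sketch under my own hypothetical setup (names and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network; B is a fixed random matrix used in place of W2.T
# to deliver the error signal to the hidden layer.
W1 = rng.normal(size=(8, 4)) * 0.1
W2 = rng.normal(size=(1, 8)) * 0.1
B = rng.normal(size=(8, 1))            # fixed random feedback weights

def fa_step(x, y, lr=0.01):
    global W1, W2
    h = np.tanh(W1 @ x)
    y_hat = W2 @ h
    e = y_hat - y                      # output error
    W2 -= lr * e @ h.T                 # exact delta rule at the top layer
    delta_h = (B @ e) * (1 - h**2)     # random feedback, not W2.T @ e
    W1 -= lr * delta_h @ x.T
    return float((e**2).sum())

x, y = rng.normal(size=(4, 1)), np.array([[1.0]])
losses = [fa_step(x, y) for _ in range(200)]
```

The surprising finding is that learning still works: over training, the forward weights tend to align with B, so the random feedback comes to deliver a useful teaching signal without the biologically implausible weight transport of backpropagation.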
