Search Results for author: James Kirkpatrick

Found 6 papers, 4 papers with code

Very Deep Graph Neural Networks Via Noise Regularisation

1 code implementation • 15 Jun 2021 Jonathan Godwin, Michael Schaarschmidt, Alexander Gaunt, Alvaro Sanchez-Gonzalez, Yulia Rubanova, Petar Veličković, James Kirkpatrick, Peter Battaglia

Graph Neural Networks (GNNs) perform learned message passing over an input graph, but conventional wisdom says performing more than a handful of steps makes training difficult and does not yield improved performance.

Graph Property Prediction • Molecular Property Prediction +1
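The snippet above describes learned message passing with noise regularisation. A rough sketch of one such step is below; the Gaussian input noise, mean aggregation, ReLU update, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def message_passing_step(node_feats, adjacency, weight, noise_std=0.1, rng=None):
    """One GNN message-passing step (sketch): perturb node features with
    Gaussian noise, mean-aggregate neighbour messages, apply a linear
    update with a ReLU nonlinearity."""
    rng = rng or np.random.default_rng(0)
    noisy = node_feats + rng.normal(0.0, noise_std, node_feats.shape)
    # Mean-aggregate messages from neighbours (row-normalised adjacency).
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    messages = (adjacency @ noisy) / deg
    return np.maximum(messages @ weight, 0.0)  # ReLU update
```

Stacking many such calls gives a very deep message-passing network; the paper's point is that input noise helps such deep stacks train at all.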

Distral: Robust Multitask Reinforcement Learning

no code implementations NeurIPS 2017 Yee Whye Teh, Victor Bapst, Wojciech Marian Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, Razvan Pascanu

Moreover, the proposed learning process is more robust and more stable---attributes that are critical in deep reinforcement learning.

Transfer Learning

The Forget-me-not Process

no code implementations NeurIPS 2016 Kieran Milan, Joel Veness, James Kirkpatrick, Michael Bowling, Anna Koop, Demis Hassabis

We introduce the Forget-me-not Process, an efficient, non-parametric meta-algorithm for online probabilistic sequence prediction for piecewise stationary, repeating sources.
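For flavour, online probabilistic prediction over a pool of candidate models can be sketched with a simple Bayesian mixture; the actual Forget-me-not Process handles piecewise-stationary, repeating sources non-parametrically, so the fixed Bernoulli pool and all names below are purely illustrative assumptions.

```python
import numpy as np

class BayesMixture:
    """Bayesian mixture over a fixed pool of Bernoulli models (sketch)."""

    def __init__(self, biases):
        self.biases = np.asarray(biases)  # candidate values of P(x = 1)
        # Uniform prior over models, kept in log space for stability.
        self.log_w = np.full(len(self.biases), -np.log(len(self.biases)))

    def predict(self):
        # Posterior-weighted probability that the next symbol is 1.
        w = np.exp(self.log_w - self.log_w.max())
        w /= w.sum()
        return float(w @ self.biases)

    def update(self, x):
        # Bayes rule: reweight each model by its likelihood of the symbol.
        lik = self.biases if x == 1 else 1.0 - self.biases
        self.log_w += np.log(lik + 1e-12)
```

On a stationary stream the posterior concentrates on the best model; the Forget-me-not Process extends this kind of mixing to sources that switch regimes and later revisit them.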

Progressive Neural Networks

5 code implementations • 15 Jun 2016 Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell

Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence.

Continual Learning

Policy Distillation

1 code implementation • 19 Nov 2015 Andrei A. Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, Raia Hadsell

Policies for complex visual tasks have been successfully learned with deep reinforcement learning, using an approach called deep Q-networks (DQN), but relatively large (task-specific) networks and extensive training are needed to achieve good performance.
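The core of distilling a large DQN into a smaller student is matching the teacher's softened action distribution. A minimal sketch of such a loss is below; the temperature value and function names are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def softmax(x, tau=1.0):
    """Temperature-scaled softmax, numerically stabilised."""
    z = x / tau - np.max(x / tau, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_q, student_q, tau=0.01):
    """KL divergence between the sharpened teacher policy (low-temperature
    softmax over teacher Q-values) and the student's policy (sketch)."""
    p = softmax(teacher_q, tau)    # sharpened teacher targets
    q = softmax(student_q, 1.0)    # student action distribution
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(kl.mean())
```

Minimising this loss over states sampled from the teacher's experience trains a student network that can be much smaller than the original DQN.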
