Search Results for author: Alexander Pritzel

Found 17 papers, 12 papers with code

MultiScale MeshGraphNets

no code implementations • 2 Oct 2022 • Meire Fortunato, Tobias Pfaff, Peter Wirnsberger, Alexander Pritzel, Peter Battaglia

In recent years, there has been a growing interest in using machine learning to overcome the high cost of numerical simulation, with some learned models achieving impressive speed-ups over classical solvers whilst maintaining accuracy.
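As background, learned simulators in this family operate by message passing over a mesh graph. The sketch below shows one such update step in the spirit of (MultiScale) MeshGraphNets; the tiny random-weight MLPs, feature sizes, and toy connectivity are illustrative assumptions, not the paper's architecture.

    # Minimal sketch of one graph message-passing step on a mesh, in the
    # spirit of MeshGraphNets. The MLPs are stand-ins (random, untrained
    # weights); only the dataflow is illustrated.
    import numpy as np

    rng = np.random.default_rng(0)

    def mlp(in_dim, out_dim):
        """A tiny 2-layer MLP with random (untrained) weights."""
        w1 = rng.normal(size=(in_dim, 64))
        w2 = rng.normal(size=(64, out_dim))
        return lambda x: np.maximum(x @ w1, 0.0) @ w2

    n_nodes, n_feat = 5, 8
    edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0]])  # sender, receiver
    nodes = rng.normal(size=(n_nodes, n_feat))
    edge_feats = rng.normal(size=(len(edges), n_feat))

    edge_fn = mlp(3 * n_feat, n_feat)  # updates each edge from its endpoints
    node_fn = mlp(2 * n_feat, n_feat)  # updates each node from aggregated messages

    # Edge update: combine sender, receiver and current edge features.
    senders, receivers = edges[:, 0], edges[:, 1]
    edge_feats = edge_fn(np.concatenate(
        [nodes[senders], nodes[receivers], edge_feats], axis=-1))

    # Node update: sum incoming messages, then apply a residual update.
    agg = np.zeros_like(nodes)
    np.add.at(agg, receivers, edge_feats)
    nodes = nodes + node_fn(np.concatenate([nodes, agg], axis=-1))

The paper's contribution is running such updates over meshes at multiple resolutions, so that information propagates quickly across coarse scales while fine scales retain detail.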

Normalizing flows for atomic solids

1 code implementation • 16 Nov 2021 • Peter Wirnsberger, George Papamakarios, Borja Ibarz, Sébastien Racanière, Andrew J. Ballard, Alexander Pritzel, Charles Blundell

We present a machine-learning approach, based on normalizing flows, for modelling atomic solids.
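For context, the core ingredient of any normalizing flow is an invertible transform with a tractable log-determinant Jacobian. The affine coupling layer below is a minimal generic sketch of such a transform; the flows actually used for atomic solids are more specialised, and the random-weight "networks" here are placeholders.

    # Minimal affine coupling layer: a generic normalizing-flow building
    # block with an exact, cheap log|det J|. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    w_s = rng.normal(size=(2, 2)) * 0.1  # stand-in for a small scale MLP
    w_t = rng.normal(size=(2, 2)) * 0.1  # stand-in for a small shift MLP

    def coupling_forward(x):
        """Transform x2 conditioned on x1; x1 passes through unchanged."""
        x1, x2 = x[..., :2], x[..., 2:]
        log_s, t = x1 @ w_s, x1 @ w_t
        y2 = x2 * np.exp(log_s) + t
        log_det = log_s.sum(axis=-1)  # exact log|det J|
        return np.concatenate([x1, y2], axis=-1), log_det

    def coupling_inverse(y):
        y1, y2 = y[..., :2], y[..., 2:]
        log_s, t = y1 @ w_s, y1 @ w_t
        x2 = (y2 - t) * np.exp(-log_s)
        return np.concatenate([y1, x2], axis=-1)

    x = rng.normal(size=(4, 4))
    y, log_det = coupling_forward(x)
    assert np.allclose(coupling_inverse(y), x)  # invertibility check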

Never Give Up: Learning Directed Exploration Strategies

3 code implementations • ICLR 2020 • Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Daniel Guo, Bilal Piot, Steven Kapturowski, Olivier Tieleman, Martín Arjovsky, Alexander Pritzel, Andrew Bolt, Charles Blundell

Our method doubles the performance of the base agent in all hard-exploration games in the Atari-57 suite while maintaining a very high score across the remaining games, obtaining a median human-normalised score of 1344.0%.

Atari Games
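The exploration bonus behind NGU combines a per-episode novelty signal with a life-long one. The sketch below shows that combination, with a kNN-style episodic bonus; the embedding function, constants, and the life-long multiplier (in the paper, derived from Random Network Distillation) are assumed given.

    # Sketch of an NGU-style intrinsic reward: an episodic novelty bonus
    # from kNN distances in an embedding space, modulated by a clipped
    # life-long novelty multiplier. Details here are approximations.
    import numpy as np

    def episodic_bonus(embedding, memory, k=10, eps=1e-3):
        """Larger when the embedding is far from this episode's memory."""
        if not memory:
            return 1.0
        d = np.array([np.sum((embedding - m) ** 2) for m in memory])
        d_k = np.sort(d)[:k]
        d_k = d_k / max(d_k.mean(), eps)   # normalise by a running scale
        kernel = eps / (d_k + eps)         # inverse-distance kernel
        return 1.0 / np.sqrt(kernel.sum() + 1e-8)

    def intrinsic_reward(embedding, memory, lifelong_alpha, L=5.0):
        """Episodic bonus scaled by clipped life-long novelty."""
        return episodic_bonus(embedding, memory) * np.clip(lifelong_alpha, 1.0, L)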

Targeted free energy estimation via learned mappings

no code implementations • 12 Feb 2020 • Peter Wirnsberger, Andrew J. Ballard, George Papamakarios, Stuart Abercrombie, Sébastien Racanière, Alexander Pritzel, Danilo Jimenez Rezende, Charles Blundell

Here, we cast Targeted FEP as a machine learning problem in which the mapping is parameterized as a neural network that is optimized so as to increase overlap.
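Concretely, targeted FEP evaluates the standard Zwanzig exponential average on configurations pushed through an invertible map M, with the log-determinant of M's Jacobian entering as a correction to the work. A sketch, where the energies, inverse temperature, and the (learned) map are all supplied by the caller:

    # Sketch of a targeted FEP estimate from state A to state B. The map
    # `mapping` is any invertible callable returning the transformed
    # configuration and log|det J|; in the paper it is a trained network.
    import numpy as np

    def targeted_fep(samples_A, u_A, u_B, mapping, beta):
        """Estimate beta * Delta_F using a targeted (mapped) estimator."""
        work = []
        for x in samples_A:
            y, log_det_J = mapping(x)  # M(x) and log|det dM/dx|
            # Generalised work: energy difference minus Jacobian term.
            phi = beta * (u_B(y) - u_A(x)) - log_det_J
            work.append(phi)
        work = np.array(work)
        # beta * Delta_F = -log < exp(-phi) >_A, computed stably.
        return -(np.logaddexp.reduce(-work) - np.log(len(work)))

The better the map's overlap between the mapped A-distribution and B, the lower the variance of this estimator, which is exactly the objective the learned mapping is trained to improve.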

Fast deep reinforcement learning using online adjustments from the past

2 code implementations • NeurIPS 2018 • Steven Hansen, Pablo Sprechmann, Alexander Pritzel, André Barreto, Charles Blundell

We propose Ephemeral Value Adjustments (EVA): a means of allowing deep reinforcement learning agents to rapidly adapt to experience in their replay buffer.

Atari Games • reinforcement-learning +2
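At decision time, the adjustment itself is a convex combination of the Q-network's output and a non-parametric value estimate computed from nearby replayed trajectories. A minimal sketch, where λ and the non-parametric estimate are placeholders:

    # Sketch of an EVA-style value adjustment: the parametric Q-values
    # are interpolated with a non-parametric estimate derived from
    # replayed trajectories (which the paper obtains via trajectory-
    # centric planning over replay experience).
    import numpy as np

    def eva_q_values(q_param, q_nonparam, lam=0.5):
        """Convex combination of parametric and episodic value estimates."""
        return lam * np.asarray(q_param) + (1.0 - lam) * np.asarray(q_nonparam)

    # Acting greedily with respect to the adjusted values:
    q_theta = np.array([0.1, 0.4, 0.2])  # Q-network output for one state
    q_np = np.array([0.0, 0.9, 0.1])     # value estimate from replay traces
    action = int(np.argmax(eva_q_values(q_theta, q_np)))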

Meta-Learning by the Baldwin Effect

no code implementations • 6 Jun 2018 • Chrisantha Thomas Fernando, Jakub Sygnowski, Simon Osindero, Jane Wang, Tom Schaul, Denis Teplyashin, Pablo Sprechmann, Alexander Pritzel, Andrei A. Rusu

The scope of the Baldwin effect was recently called into question by two papers that closely examined the seminal work of Hinton and Nowlan.

Meta-Learning

Generative Temporal Models with Spatial Memory for Partially Observed Environments

no code implementations • ICML 2018 • Marco Fraccaro, Danilo Jimenez Rezende, Yori Zwols, Alexander Pritzel, S. M. Ali Eslami, Fabio Viola

In model-based reinforcement learning, generative and temporal models of environments can be leveraged to boost agent performance, either by tuning the agent's representations during training or via use as part of an explicit planning mechanism.

Model-based Reinforcement Learning

PathNet: Evolution Channels Gradient Descent in Super Neural Networks

1 code implementation • 30 Jan 2017 • Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A. Rusu, Alexander Pritzel, Daan Wierstra

PathNet is a neural network algorithm that uses agents embedded in the network whose task is to discover which parts of the network to re-use for new tasks.

Continual Learning • reinforcement-learning +2
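A minimal sketch of the idea: each genotype selects a small subset of modules per layer, fitness is measured by training only the selected path, and a binary tournament overwrites the loser with a mutated copy of the winner. Population size, layer/module counts, and the fitness function below are illustrative assumptions.

    # Sketch of PathNet-style path evolution over a modular network.
    import numpy as np

    rng = np.random.default_rng(0)
    N_LAYERS, N_MODULES, ACTIVE = 3, 10, 3

    def random_path():
        """One genotype: ACTIVE distinct modules chosen per layer."""
        return [rng.choice(N_MODULES, size=ACTIVE, replace=False)
                for _ in range(N_LAYERS)]

    def mutate(path, p=0.1):
        """Independently resample each gene with probability p."""
        return [np.where(rng.random(ACTIVE) < p,
                         rng.integers(0, N_MODULES, ACTIVE), layer)
                for layer in path]

    def fitness(path):
        return rng.random()  # placeholder: train selected modules, then evaluate

    population = [random_path() for _ in range(8)]
    for _ in range(100):  # binary tournament selection
        i, j = rng.choice(len(population), size=2, replace=False)
        winner, loser = (i, j) if fitness(population[i]) >= fitness(population[j]) else (j, i)
        population[loser] = mutate(population[winner])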

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles

25 code implementations • NeurIPS 2017 • Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell

Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks.
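The recipe itself is deliberately simple: train M identical networks from independent random initialisations and average their predictive distributions, using disagreement between members as the uncertainty signal. A classification-flavoured sketch (the "networks" below are placeholders; the paper additionally trains with proper scoring rules and, optionally, adversarial examples):

    # Sketch of deep-ensemble prediction: M independently initialised
    # (and independently trained) networks, with predictive probabilities
    # averaged at test time.
    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def make_model(seed):
        w = np.random.default_rng(seed).normal(size=(4, 3))  # toy "network"
        return lambda x: softmax(x @ w)

    ensemble = [make_model(seed) for seed in range(5)]  # M = 5 members
    x = rng.normal(size=(2, 4))                         # a batch of inputs
    probs = np.mean([m(x) for m in ensemble], axis=0)   # averaged predictive
    # Entropy of the averaged distribution serves as an uncertainty score.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)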

Model-Free Episodic Control

3 code implementations • 14 Jun 2016 • Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z. Leibo, Jack Rae, Daan Wierstra, Demis Hassabis

State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance.

Decision Making • Hippocampus +2
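Episodic control sidesteps slow gradient-based value learning with a non-parametric memory: per action, a table maps state embeddings to the best Monte-Carlo return observed so far, and novel states are valued by a k-nearest-neighbour readout. A sketch under the assumption that state embeddings (e.g. random projections) are given:

    # Sketch of a model-free episodic control value table. One instance
    # is kept per action; the paper keeps the max return per key, while
    # this simplified version just appends observations.
    import numpy as np

    class EpisodicValueTable:
        def __init__(self, k=5):
            self.keys, self.values, self.k = [], [], k

        def update(self, embedding, mc_return):
            """Record the Monte-Carlo return observed from this state."""
            self.keys.append(np.asarray(embedding))
            self.values.append(mc_return)

        def estimate(self, embedding):
            """k-NN average of stored returns around the query embedding."""
            if not self.keys:
                return 0.0
            d = np.array([np.sum((embedding - key) ** 2) for key in self.keys])
            nearest = np.argsort(d)[: self.k]
            return float(np.mean(np.array(self.values)[nearest]))

Acting greedily with respect to these estimates lets the agent exploit high-return experience after a single visit, rather than waiting for many gradient updates.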
