Search Results for author: Charles Blundell

Found 49 papers, 23 papers with code

The CLRS Algorithmic Reasoning Benchmark

1 code implementation • 31 May 2022 • Petar Veličković, Adrià Puigdomènech Badia, David Budden, Razvan Pascanu, Andrea Banino, Misha Dashevskiy, Raia Hadsell, Charles Blundell

Learning representations of algorithms is an emerging area of machine learning, seeking to bridge concepts from neural networks with classical algorithms.

Learning to Execute

Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?

no code implementations • 13 Jan 2022 • Nenad Tomasev, Ioana Bica, Brian McWilliams, Lars Buesing, Razvan Pascanu, Charles Blundell, Jovana Mitrovic

Despite recent progress made by self-supervised methods in representation learning with residual networks, they still underperform supervised learning on the ImageNet classification benchmark, limiting their applicability in performance-critical settings.

Representation Learning • Self-Supervised Image Classification +2

Normalizing flows for atomic solids

1 code implementation • 16 Nov 2021 • Peter Wirnsberger, George Papamakarios, Borja Ibarz, Sébastien Racanière, Andrew J. Ballard, Alexander Pritzel, Charles Blundell

We present a machine-learning approach, based on normalizing flows, for modelling atomic solids.
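
For context, any normalizing-flow model, this one included, rests on the change-of-variables identity below; the paper's specific flow architecture for atomic solids is not described in the snippet above, so the notation here is generic.

```latex
% Change of variables: a base density p_Z is pushed through an invertible
% map f to give the model density p_X over atomic configurations x.
\log p_X(x) = \log p_Z\big(f^{-1}(x)\big)
            + \log \left| \det \frac{\partial f^{-1}(x)}{\partial x} \right|
```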

PonderNet: Learning to Ponder

4 code implementations • ICML Workshop AutoML 2021 • Andrea Banino, Jan Balaguer, Charles Blundell

In standard neural networks the amount of computation used grows with the size of the inputs, but not with the complexity of the problem being learnt.

Question Answering
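
The mechanism behind this snippet: PonderNet learns a per-step halting probability, inducing a distribution over how many computation steps to spend on an input. A minimal sketch of that distribution follows; variable names are illustrative and the paper's KL regulariser toward a geometric prior is omitted.

```python
import torch

def halting_distribution(lambdas):
    """Turn per-step halting probabilities lambda_n (shape: steps x batch)
    into a distribution over halting steps:
    p_n = lambda_n * prod_{j<n} (1 - lambda_j)."""
    survive = torch.cumprod(1.0 - lambdas, dim=0)  # prob. of not halting through step n
    prev = torch.cat([torch.ones_like(lambdas[:1]), survive[:-1]], dim=0)
    return lambdas * prev

# Toy usage: per-step losses would be weighted by p_n in the expected loss.
p = halting_distribution(torch.sigmoid(torch.randn(10, 4)))
```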

Neural Algorithmic Reasoning

no code implementations • 6 May 2021 • Petar Veličković, Charles Blundell

Algorithms have been fundamental to recent global technological advances and, in particular, they have been the cornerstone of technical advances in one field rapidly being applied to another.

Persistent Message Passing

no code implementations • ICLR Workshop GTRL 2021 • Heiko Strathmann, Mohammadamin Barekatain, Charles Blundell, Petar Veličković

Graph neural networks (GNNs) are a powerful inductive bias for modelling algorithmic reasoning procedures and data structures.

Inductive Bias

Beyond Fine-Tuning: Transferring Behavior in Reinforcement Learning

no code implementations • 24 Feb 2021 • Víctor Campos, Pablo Sprechmann, Steven Hansen, Andre Barreto, Steven Kapturowski, Alex Vitvitskyi, Adrià Puigdomènech Badia, Charles Blundell

We introduce Behavior Transfer (BT), a technique that leverages pre-trained policies for exploration and that is complementary to transferring neural network weights.

reinforcement-learning • Unsupervised Pre-training

Factorizing Declarative and Procedural Knowledge in Structured, Dynamical Environments

no code implementations • ICLR 2021 • Anirudh Goyal, Alex Lamb, Phanideep Gampa, Philippe Beaudoin, Charles Blundell, Sergey Levine, Yoshua Bengio, Michael Curtis Mozer

To use a video game as an illustration, two enemies of the same type will share schemata but will have separate object files to encode their distinct state (e.g., health, position).

Representation Learning via Invariant Causal Mechanisms

no code implementations • 15 Oct 2020 • Jovana Mitrovic, Brian McWilliams, Jacob Walker, Lars Buesing, Charles Blundell

Self-supervised learning has emerged as a strategy to reduce the reliance on costly supervised signal by pretraining representations only using unlabeled data.

Contrastive Learning • Out-of-Distribution Generalization +3

Object Files and Schemata: Factorizing Declarative and Procedural Knowledge in Dynamical Systems

no code implementations • 29 Jun 2020 • Anirudh Goyal, Alex Lamb, Phanideep Gampa, Philippe Beaudoin, Sergey Levine, Charles Blundell, Yoshua Bengio, Michael Mozer

To use a video game as an illustration, two enemies of the same type will share schemata but will have separate object files to encode their distinct state (e.g., health, position).

Pointer Graph Networks

no code implementations • NeurIPS 2020 • Petar Veličković, Lars Buesing, Matthew C. Overlan, Razvan Pascanu, Oriol Vinyals, Charles Blundell

This static input structure is often informed purely by insight of the machine learning practitioner, and might not be optimal for the actual task the GNN is solving.

Never Give Up: Learning Directed Exploration Strategies

3 code implementations • ICLR 2020 • Adrià Puigdomènech Badia, Pablo Sprechmann, Alex Vitvitskyi, Daniel Guo, Bilal Piot, Steven Kapturowski, Olivier Tieleman, Martín Arjovsky, Alexander Pritzel, Andrew Bolt, Charles Blundell

Our method doubles the performance of the base agent on all hard-exploration games in the Atari-57 suite while maintaining a very high score across the remaining games, obtaining a median human-normalised score of 1344.0%.

Atari Games

Targeted free energy estimation via learned mappings

no code implementations • 12 Feb 2020 • Peter Wirnsberger, Andrew J. Ballard, George Papamakarios, Stuart Abercrombie, Sébastien Racanière, Alexander Pritzel, Danilo Jimenez Rezende, Charles Blundell

Here, we cast Targeted FEP as a machine learning problem in which the mapping is parameterized as a neural network that is optimized so as to increase overlap.
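
For reference, a standard statement of the targeted free energy perturbation identity that the learned mapping plugs into; the notation is assumed here rather than quoted from the paper.

```latex
% Targeted FEP: configurations x sampled from state A are passed through an
% invertible map M before evaluating the energy of state B. J_M is the
% Jacobian of M and beta the inverse temperature.
e^{-\beta \Delta F} = \left\langle e^{-\beta \Phi(x)} \right\rangle_{A},
\qquad
\Phi(x) = U_B\big(M(x)\big) - U_A(x) - \beta^{-1} \ln \left| \det J_M(x) \right|
```

The machine learning problem referred to in the snippet is then to parameterize M as a neural network and train it so that the distribution of mapped samples overlaps the target state.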

Shaping representations through communication: community size effect in artificial learning systems

no code implementations • 12 Dec 2019 • Olivier Tieleman, Angeliki Lazaridou, Shibl Mourad, Charles Blundell, Doina Precup

Motivated by theories of language and communication that explain why communities with large numbers of speakers have, on average, simpler languages with more regularity, we cast the representation learning problem in terms of learning to communicate.

Representation Learning

Generalization of Reinforcement Learners with Working and Episodic Memory

1 code implementation • NeurIPS 2019 • Meire Fortunato, Melissa Tan, Ryan Faulkner, Steven Hansen, Adrià Puigdomènech Badia, Gavin Buttimore, Charlie Deck, Joel Z. Leibo, Charles Blundell

In this paper, we aim to develop a comprehensive methodology to test different kinds of memory in an agent and assess how well the agent can apply what it learns in training to a holdout set that differs from the training set along dimensions that we suggest are relevant for evaluating memory-specific generalization.

Neural Execution of Graph Algorithms

no code implementations • ICLR 2020 • Petar Veličković, Rex Ying, Matilde Padovano, Raia Hadsell, Charles Blundell

Graph Neural Networks (GNNs) are a powerful representational tool for solving problems on graph-structured inputs.

Infinitely Deep Infinite-Width Networks

no code implementations • ICLR 2019 • Jovana Mitrovic, Peter Wirnsberger, Charles Blundell, Dino Sejdinovic, Yee Whye Teh

Infinite-width neural networks have been extensively used to study the theoretical properties underlying the extraordinary empirical success of standard, finite-width neural networks.

Fast deep reinforcement learning using online adjustments from the past

1 code implementation • NeurIPS 2018 • Steven Hansen, Pablo Sprechmann, Alexander Pritzel, André Barreto, Charles Blundell

We propose Ephemeral Value Adjustments (EVA): a means of allowing deep reinforcement learning agents to rapidly adapt to experience in their replay buffer.

Atari Games • reinforcement-learning
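
The snippet does not spell out the mechanism, but to the best of my reading EVA acts at decision time by blending the parametric network's Q-values with non-parametric estimates computed from nearby trajectories in the replay buffer. A minimal sketch of that blend, with the mixing weight purely illustrative:

```python
import numpy as np

def eva_q_values(q_param, q_nonparam, lam=0.4):
    """Blend parametric Q-values with non-parametric, replay-derived value
    estimates: Q_EVA = lam * Q_theta + (1 - lam) * Q_NP."""
    return lam * np.asarray(q_param) + (1.0 - lam) * np.asarray(q_nonparam)
```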

Been There, Done That: Meta-Learning with Episodic Recall

1 code implementation • ICML 2018 • Samuel Ritter, Jane X. Wang, Zeb Kurth-Nelson, Siddhant M. Jayakumar, Charles Blundell, Razvan Pascanu, Matthew Botvinick

Meta-learning agents excel at rapidly learning new tasks from open-ended task distributions; yet, they forget what they learn about each task as soon as the next begins.

Pushing the bounds of dropout

1 code implementation • ICLR 2019 • Gábor Melis, Charles Blundell, Tomáš Kočiský, Karl Moritz Hermann, Chris Dyer, Phil Blunsom

We show that dropout training is best understood as performing MAP estimation concurrently for a family of conditional models whose objectives are themselves lower bounded by the original dropout objective.

Language Modelling
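
The "lower bounded" claim in the snippet is an instance of Jensen's inequality; the notation below is mine, not the paper's. The usual dropout training objective (right-hand side, an expectation of log-likelihoods over masks z) lower-bounds the log-likelihood of the mixture model obtained by averaging over masks (left-hand side).

```latex
\log \mathbb{E}_{z \sim p(z)}\big[\, p(y \mid x, z, \theta) \,\big]
\;\geq\;
\mathbb{E}_{z \sim p(z)}\big[\, \log p(y \mid x, z, \theta) \,\big]
```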

Revisiting Bayes by Backprop

no code implementations • ICLR 2018 • Meire Fortunato, Charles Blundell, Oriol Vinyals

We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improves our model over a variety of other schemes for training them.

Image Captioning • Language Modelling

Noisy Networks for Exploration

14 code implementations • ICLR 2018 • Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Ian Osband, Alex Graves, Vlad Mnih, Remi Munos, Demis Hassabis, Olivier Pietquin, Charles Blundell, Shane Legg

We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration.

Atari Games • Efficient Exploration +1
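
A minimal sketch of the core idea named in the snippet: a linear layer whose weights receive learned, parametric noise resampled on every forward pass. This version uses independent Gaussian noise; the paper also discusses a cheaper factorised variant, and the initialisation constants here are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Linear layer with learnable noise scales: w = mu_w + sigma_w * eps."""

    def __init__(self, n_in, n_out, sigma_init=0.017):
        super().__init__()
        self.w_mu = nn.Parameter(torch.empty(n_out, n_in).uniform_(-0.1, 0.1))
        self.w_sigma = nn.Parameter(torch.full((n_out, n_in), sigma_init))
        self.b_mu = nn.Parameter(torch.zeros(n_out))
        self.b_sigma = nn.Parameter(torch.full((n_out,), sigma_init))

    def forward(self, x):
        # Fresh noise each call, so the induced policy is stochastic.
        w = self.w_mu + self.w_sigma * torch.randn_like(self.w_sigma)
        b = self.b_mu + self.b_sigma * torch.randn_like(self.b_sigma)
        return F.linear(x, w, b)
```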

Bayesian Recurrent Neural Networks

3 code implementations • 10 Apr 2017 • Meire Fortunato, Charles Blundell, Oriol Vinyals

We also empirically demonstrate how Bayesian RNNs are superior to traditional RNNs on a language modelling benchmark and an image captioning task, as well as showing how each of these methods improves our model over a variety of other schemes for training them.

Image Captioning • Language Modelling

Learning Deep Nearest Neighbor Representations Using Differentiable Boundary Trees

1 code implementation • 28 Feb 2017 • Daniel Zoran, Balaji Lakshminarayanan, Charles Blundell

We introduce a new method called differentiable boundary tree which allows for learning deep kNN representations.

PathNet: Evolution Channels Gradient Descent in Super Neural Networks

1 code implementation • 30 Jan 2017 • Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A. Rusu, Alexander Pritzel, Daan Wierstra

It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks.

Continual Learning • reinforcement-learning +1

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles

20 code implementations • NeurIPS 2017 • Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell

Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks.
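
The snippet above is motivational, but the method is simple enough to sketch: train several networks of the same architecture from different random initialisations, then average their predictive distributions at test time. The sketch below assumes classifiers that return logits.

```python
import torch

def ensemble_predict(models, x):
    """Deep-ensemble prediction: average predictive distributions (not logits)
    over M independently trained networks, treating the ensemble as a mixture."""
    with torch.no_grad():
        probs = torch.stack([m(x).softmax(dim=-1) for m in models])
    return probs.mean(dim=0)  # shape: (batch, num_classes)
```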

Learning to reinforcement learn

7 code implementations • 17 Nov 2016 • Jane X. Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z. Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, Matt Botvinick

We unpack these points in a series of seven proof-of-concept experiments, each of which examines a key aspect of deep meta-RL.

Meta-Learning • Meta Reinforcement Learning +1

Early Visual Concept Learning with Unsupervised Deep Learning

1 code implementation • 17 Jun 2016 • Irina Higgins, Loic Matthey, Xavier Glorot, Arka Pal, Benigno Uria, Charles Blundell, Shakir Mohamed, Alexander Lerchner

Automated discovery of early visual concepts from raw image data is a major open challenge in AI research.

Model-Free Episodic Control

3 code implementations • 14 Jun 2016 • Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z. Leibo, Jack Rae, Daan Wierstra, Demis Hassabis

State of the art deep reinforcement learning algorithms take many millions of interactions to attain human-level performance.

Decision Making • Hippocampus +1
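
As a rough sketch of the episodic-control idea: keep a per-action table of the highest return obtained from each embedded state, and estimate values for novel states by averaging the k nearest stored entries. Details such as the exact write rule, kd-tree lookup, and memory eviction in the paper are omitted here, and k is illustrative.

```python
import numpy as np

class EpisodicMemory:
    """Per-action episodic value store with k-NN reads over state embeddings."""

    def __init__(self, k=11):
        self.k = k
        self.keys, self.returns = [], []

    def write(self, embedding, episodic_return):
        self.keys.append(np.asarray(embedding, dtype=float))
        self.returns.append(float(episodic_return))

    def read(self, embedding):
        keys = np.stack(self.keys)
        dists = np.linalg.norm(keys - np.asarray(embedding, dtype=float), axis=1)
        if dists.min() == 0.0:                 # exact match: use the stored return
            return self.returns[int(dists.argmin())]
        nearest = np.argsort(dists)[: self.k]  # novel state: k-NN average
        return float(np.mean([self.returns[i] for i in nearest]))
```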

Distributed Bayesian Learning with Stochastic Natural-gradient Expectation Propagation and the Posterior Server

no code implementations • 31 Dec 2015 • Leonard Hasenclever, Stefan Webb, Thibaut Lienart, Sebastian Vollmer, Balaji Lakshminarayanan, Charles Blundell, Yee Whye Teh

The posterior server allows scalable and robust Bayesian learning in cases where a data set is stored in a distributed manner across a cluster, with each compute node containing a disjoint subset of data.

Variational Inference

Weight Uncertainty in Neural Networks

33 code implementations • 20 May 2015 • Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, Daan Wierstra

We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop.

Bayesian Inference • General Classification +1
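
A minimal sketch of Bayes by Backprop's central move: keep a factorised Gaussian posterior over each weight and train its mean and scale with the reparameterisation trick. The prior (standard normal) and initialisation constants below are illustrative simplifications, not the paper's exact choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Linear layer with a diagonal Gaussian posterior over its weights."""

    def __init__(self, n_in, n_out):
        super().__init__()
        self.mu = nn.Parameter(0.1 * torch.randn(n_out, n_in))
        self.rho = nn.Parameter(torch.full((n_out, n_in), -3.0))  # sigma = softplus(rho)

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)  # reparameterised weight sample
        # KL(q(w) || N(0, I)), summed over weights; added to the data NLL
        # (optionally reweighted per minibatch) to form the training loss.
        self.kl = (-torch.log(sigma) + (sigma**2 + self.mu**2 - 1.0) / 2).sum()
        return F.linear(x, w)
```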

Bayesian Hierarchical Community Discovery

no code implementations • NeurIPS 2013 • Charles Blundell, Yee Whye Teh

We propose an efficient Bayesian nonparametric model for discovering hierarchical community structure in social networks.

Model Selection

Deep AutoRegressive Networks

no code implementations • 31 Oct 2013 • Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, Daan Wierstra

We introduce a deep, generative autoencoder capable of learning hierarchies of distributed representations from data.

Atari Games

Modelling Reciprocating Relationships with Hawkes Processes

no code implementations • NeurIPS 2012 • Charles Blundell, Jeff Beck, Katherine A. Heller

We present a Bayesian nonparametric model that discovers implicit social structure from interaction time-series data.

Time Series
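
For reference, the conditional intensity that defines a Hawkes process, the generic self-exciting form on which the paper's reciprocity model builds; the kernel in the comment is a common convention, not necessarily the paper's choice.

```latex
% Base rate mu plus self-excitation from past events at times t_i,
% through a triggering kernel phi (e.g. phi(s) = alpha * e^{-s/tau}).
\lambda(t) = \mu + \sum_{t_i < t} \phi(t - t_i)
```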

Modelling Genetic Variations using Fragmentation-Coagulation Processes

no code implementations • NeurIPS 2011 • Yee W. Teh, Charles Blundell, Lloyd Elliott

We propose a novel class of Bayesian nonparametric models for sequential data called fragmentation-coagulation processes (FCPs).

