Search Results for author: Sergio Gómez Colmenarejo

Found 12 papers, 4 papers with code

TF-Replicator: Distributed Machine Learning for Researchers

1 code implementation • 1 Feb 2019 • Peter Buchlovsky, David Budden, Dominik Grewe, Chris Jones, John Aslanides, Frederic Besse, Andy Brock, Aidan Clark, Sergio Gómez Colmenarejo, Aedan Pope, Fabio Viola, Dan Belov

We describe TF-Replicator, a framework for distributed machine learning designed for DeepMind researchers and implemented as an abstraction over TensorFlow.

BIG-bench Machine Learning • Continuous Control • +1

Learning Awareness Models

no code implementations • ICLR 2018 • Brandon Amos, Laurent Dinh, Serkan Cabi, Thomas Rothörl, Sergio Gómez Colmenarejo, Alistair Muldal, Tom Erez, Yuval Tassa, Nando de Freitas, Misha Denil

We show that models trained to predict proprioceptive information about the agent's body come to represent objects in the external world.

Programmable Agents

no code implementations • 20 Jun 2017 • Misha Denil, Sergio Gómez Colmenarejo, Serkan Cabi, David Saxton, Nando de Freitas

We build deep RL agents that execute declarative programs expressed in a formal language.

Parallel Multiscale Autoregressive Density Estimation

no code implementations • ICML 2017 • Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, Nando de Freitas

Our new PixelCNN model achieves competitive density estimation and an orders-of-magnitude speedup (O(log N) sampling instead of O(N)), enabling the practical generation of 512x512 images.

Conditional Image Generation • Density Estimation • +2
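
A back-of-the-envelope illustration of the sampling-cost claim above: a standard PixelCNN requires one sequential network evaluation per pixel, while a multiscale scheme that repeatedly doubles the resolution needs only a logarithmic number of sequential stages. The sketch below is illustrative only; the base resolution and the number of parallel groups emitted per upscaling stage are placeholder assumptions, not the exact values used in the paper.

```python
# Illustrative count of *sequential* network evaluations needed to sample an
# H x W image: standard autoregressive PixelCNN (one pixel per step) versus a
# multiscale scheme that doubles the resolution at each stage.
import math

def sequential_steps_pixelcnn(height: int, width: int) -> int:
    # One network evaluation per pixel, strictly in raster order: O(N).
    return height * width

def sequential_steps_multiscale(height: int, width: int,
                                base: int = 4, groups_per_stage: int = 4) -> int:
    # Start from a small base x base image, then double the resolution until
    # the target size is reached; each stage emits its new pixels in a fixed
    # number of parallel groups, so the sequential cost is O(log N).
    # `base` and `groups_per_stage` are assumed constants for illustration.
    stages = int(math.log2(height // base))
    return base * base + stages * groups_per_stage

if __name__ == "__main__":
    h = w = 512
    print("naive PixelCNN:", sequential_steps_pixelcnn(h, w))    # 262144 sequential steps
    print("multiscale    :", sequential_steps_multiscale(h, w))  # a few dozen sequential steps
```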

One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL

no code implementations • ICLR 2019 • Tom Le Paine, Sergio Gómez Colmenarejo, Ziyu Wang, Scott Reed, Yusuf Aytar, Tobias Pfaff, Matt W. Hoffman, Gabriel Barth-Maron, Serkan Cabi, David Budden, Nando de Freitas

MetaMimic can learn both (i) policies for high-fidelity one-shot imitation of diverse novel skills, and (ii) policies that enable the agent to solve tasks more efficiently than the demonstrators.

Addressing Extrapolation Error in Deep Offline Reinforcement Learning

no code implementations • 1 Jan 2021 • Caglar Gulcehre, Sergio Gómez Colmenarejo, Ziyu Wang, Jakub Sygnowski, Thomas Paine, Konrad Zolna, Yutian Chen, Matthew Hoffman, Razvan Pascanu, Nando de Freitas

These errors can be compounded by bootstrapping when the function approximator overestimates, leading the value function to grow unbounded, thereby crippling learning.

Offline RL • reinforcement-learning • +1
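
The compounding-error mechanism described in the abstract above can be illustrated with a deliberately simple toy: if the function approximator overestimates the value of out-of-distribution actions by a factor proportional to the current value scale, the bootstrapped target inflates faster than the discount shrinks it, and the estimate diverges. The sketch below is a hand-rolled illustration under that multiplicative-error assumption, not the paper's method or experimental setup.

```python
# Toy illustration of value divergence from bootstrapping plus overestimation
# in the offline setting. Assumption (not from the paper): extrapolation error
# on unseen actions scales with the current value estimate by a factor `alpha`,
# and the fixed dataset never provides experience to correct it.

def bootstrap_with_overestimation(reward: float = 1.0, gamma: float = 0.99,
                                  alpha: float = 0.05, steps: int = 50) -> list[float]:
    """Iterate V <- r + gamma * backup, where the backup over unseen actions is
    overestimated as (1 + alpha) * V. When gamma * (1 + alpha) > 1 the estimate
    grows without bound instead of converging to r / (1 - gamma)."""
    v = 0.0
    history = []
    for _ in range(steps):
        overestimated_backup = (1.0 + alpha) * v  # extrapolation error on OOD actions
        v = reward + gamma * overestimated_backup
        history.append(v)
    return history

if __name__ == "__main__":
    values = bootstrap_with_overestimation()
    print(values[::10])  # the sequence keeps growing rather than converging
```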

Regularized Behavior Value Estimation

no code implementations • 17 Mar 2021 • Caglar Gulcehre, Sergio Gómez Colmenarejo, Ziyu Wang, Jakub Sygnowski, Thomas Paine, Konrad Zolna, Yutian Chen, Matthew Hoffman, Razvan Pascanu, Nando de Freitas

Due to bootstrapping, these errors get amplified during training and can lead to divergence, thereby crippling learning.

Offline RL
