Search Results for author: Víctor Campos

Found 5 papers, 2 papers with code

Human-level Atari 200x faster

1 code implementation • 15 Sep 2022 • Steven Kapturowski, Víctor Campos, Ray Jiang, Nemanja Rakićević, Hado van Hasselt, Charles Blundell, Adrià Puigdomènech Badia

The task of building general agents that perform well over a wide range of tasks has been an important goal in reinforcement learning since its inception.

Beyond Fine-Tuning: Transferring Behavior in Reinforcement Learning

no code implementations • 24 Feb 2021 • Víctor Campos, Pablo Sprechmann, Steven Hansen, Andre Barreto, Steven Kapturowski, Alex Vitvitskyi, Adrià Puigdomènech Badia, Charles Blundell

We introduce Behavior Transfer (BT), a technique that leverages pre-trained policies for exploration and that is complementary to transferring neural network weights.

Reinforcement Learning +2

Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills

1 code implementation • ICML 2020 • Víctor Campos, Alexander Trott, Caiming Xiong, Richard Socher, Xavier Giro-i-Nieto, Jordi Torres

We perform an extensive evaluation of skill discovery methods on controlled environments and show that EDL offers significant advantages, such as overcoming the coverage problem, reducing the dependence of learned skills on the initial state, and allowing the user to define a prior over which behaviors should be learned.

Reinforcement Learning

Importance Weighted Evolution Strategies

no code implementations • 12 Nov 2018 • Víctor Campos, Xavier Giro-i-Nieto, Jordi Torres

Evolution Strategies (ES) have emerged as a scalable alternative to popular Reinforcement Learning (RL) techniques, providing an almost perfect speedup when distributed across hundreds of CPU cores thanks to their reduced communication overhead.

Reinforcement Learning +1

Comparing Fixed and Adaptive Computation Time for Recurrent Neural Networks

no code implementations • 21 Mar 2018 • Daniel Fojo, Víctor Campos, Xavier Giro-i-Nieto

Adaptive Computation Time (ACT) for Recurrent Neural Networks is one of the most promising architectures for variable computation.
