Search Results for author: Léonard Blier

Found 6 papers, 2 papers with code

The Description Length of Deep Learning Models

no code implementations NeurIPS 2018 Léonard Blier, Yann Ollivier

This might explain the relatively poor practical performance of variational methods in deep learning.

Learning with Random Learning Rates

no code implementations 27 Sep 2018 Léonard Blier, Pierre Wolinski, Yann Ollivier

Hyperparameter tuning is a bothersome step in the training of deep learning models.

Learning with Random Learning Rates

1 code implementation 2 Oct 2018 Léonard Blier, Pierre Wolinski, Yann Ollivier

Hyperparameter tuning is a bothersome step in the training of deep learning models.

Making Deep Q-learning methods robust to time discretization

1 code implementation 28 Jan 2019 Corentin Tallec, Léonard Blier, Yann Ollivier

Despite remarkable successes, Deep Reinforcement Learning (DRL) is not robust to hyperparameterization, implementation details, or small environment changes (Henderson et al. 2017, Zhang et al. 2018).
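One of the fragilities this paper targets is the time-discretization step δt: in standard discrete-time Q-learning, the gap between the Q-values of different actions shrinks as δt shrinks, so the greedy policy degrades. A minimal numeric sketch of that effect, under illustrative assumptions (a single transition with per-step reward r·δt, discount exp(-ρ·δt), and made-up values for the rates and the next-state value; `advantage` is a hypothetical helper, not the paper's code):

```python
import math

def advantage(dt, r_good=1.0, r_bad=0.0, v_next=10.0, rho=0.1):
    # Discrete-time Q-values with physical timestep dt: reward accrues as
    # r * dt and the discount factor is exp(-rho * dt).
    gamma = math.exp(-rho * dt)
    q_good = r_good * dt + gamma * v_next
    q_bad = r_bad * dt + gamma * v_next
    # The action gap (advantage) is (r_good - r_bad) * dt: it vanishes
    # linearly as dt -> 0, so both Q-values collapse toward the state value.
    return q_good - q_bad

for dt in (1.0, 0.1, 0.01):
    print(dt, advantage(dt))
```

The printed gap shrinks by a factor of 10 each time δt does, which is the instability the paper's robust (advantage-based) variant of deep Q-learning is designed to avoid.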

Q-Learning

Learning Successor States and Goal-Dependent Values: A Mathematical Viewpoint

no code implementations 18 Jan 2021 Léonard Blier, Corentin Tallec, Yann Ollivier

In reinforcement learning, temporal difference-based algorithms can be sample-inefficient: for instance, with sparse rewards, no learning occurs until a reward is observed.
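The sparse-reward failure mode mentioned here is easy to see in tabular TD(0). A minimal sketch, assuming a toy 5-state chain with a single reward of 1 at the terminal state (the chain, step sizes, and names like `episode` are all illustrative, not from the paper):

```python
# Tabular TD(0) on a 5-state chain; reward 1 only on reaching the end.
n_states = 5
V = [0.0] * (n_states + 1)  # V[n_states] is the terminal state (value 0)
alpha, gamma = 0.5, 0.9

def episode():
    s = 0
    while s < n_states:
        s_next = s + 1
        r = 1.0 if s_next == n_states else 0.0  # sparse: reward only at the end
        # TD(0) update toward the bootstrapped target r + gamma * V[s_next]
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

episode()
# After one full episode, only the state adjacent to the reward has moved:
# every earlier TD target was 0 + gamma * 0, so those values stayed at 0.
print(V)
```

Until the reward is actually observed, every TD target is zero and no value estimate changes, which is the sample-inefficiency the paper's successor-state / goal-dependent value formulation addresses.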

Unbiased Methods for Multi-Goal Reinforcement Learning

no code implementations 16 Jun 2021 Léonard Blier, Yann Ollivier

We introduce unbiased deep Q-learning and actor-critic algorithms that can handle such infinitely sparse rewards, and test them in toy environments.

Multi-Goal Reinforcement Learning, Q-Learning, +2