Search Results for author: Emmanuel Rachelson

Found 17 papers, 7 papers with code

Lipschitz Lifelong Reinforcement Learning

1 code implementation • 15 Jan 2020 • Erwan Lecarpentier, David Abel, Kavosh Asadi, Yuu Jinnai, Emmanuel Rachelson, Michael L. Littman

We consider the problem of knowledge transfer when an agent is facing a series of Reinforcement Learning (RL) tasks.

reinforcement-learning • Reinforcement Learning (RL) +1

When, where, and how to add new neurons to ANNs

1 code implementation • 17 Feb 2022 • Kaitlin Maile, Emmanuel Rachelson, Hervé Luga, Dennis G. Wilson

Neurogenesis in ANNs is an understudied and difficult problem, even compared to other forms of structural learning like pruning.

Large Batch Experience Replay

1 code implementation • 4 Oct 2021 • Thibault Lahire, Matthieu Geist, Emmanuel Rachelson

The optimal sampling distribution being intractable, we make several approximations providing good results in practice and introduce, among others, LaBER (Large Batch Experience Replay), an easy-to-code and efficient method for sampling the replay buffer.

Atari Games • Reinforcement Learning (RL)
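
The listing gives only the abstract's one-line summary of LaBER. As a rough illustration of the large-batch subsampling idea it alludes to (not the authors' exact algorithm; the buffer interface, the TD-error scorer td_error_fn, and the importance-style correction below are assumptions), a minimal Python sketch:

    import numpy as np

    def laber_sample(buffer, td_error_fn, mini_batch_size=32, ratio=4, rng=None):
        # Sketch: draw a large batch uniformly from the replay buffer, score it
        # with a cheap surrogate priority (|TD error|), then subsample the
        # training mini-batch proportionally to those priorities.
        rng = rng or np.random.default_rng()
        large_batch_size = ratio * mini_batch_size
        idx = rng.integers(0, len(buffer), size=large_batch_size)
        candidates = [buffer[i] for i in idx]
        priorities = np.abs(np.array([td_error_fn(t) for t in candidates])) + 1e-6
        probs = priorities / priorities.sum()
        chosen = rng.choice(large_batch_size, size=mini_batch_size, p=probs)
        weights = 1.0 / (large_batch_size * probs[chosen])  # assumed correction term
        return [candidates[i] for i in chosen], weights

Here buffer is any indexable collection of transitions and td_error_fn maps one transition to a scalar TD error; both names are hypothetical.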

Disentanglement by Cyclic Reconstruction

1 code implementation • 24 Dec 2021 • David Bertoin, Emmanuel Rachelson

This enables the isolation of task-specific information from both domains and a projection into a common representation.

Disentanglement • Information Retrieval +2

Naive Bayes Classification for Subset Selection

1 code implementation • 19 Jul 2017 • Luca Mossina, Emmanuel Rachelson

This article focuses on learning how to automatically select a subset of items from a larger set.

Classification • General Classification +1

Open Loop Execution of Tree-Search Algorithms, extended version

no code implementations • 3 May 2018 • Erwan Lecarpentier, Guillaume Infantes, Charles Lesire, Emmanuel Rachelson

In the context of tree-search stochastic planning algorithms where a generative model is available, we consider on-line planning algorithms building trees in order to recommend an action.

Empirical evaluation of a Q-Learning Algorithm for Model-free Autonomous Soaring

no code implementations • 18 Jul 2017 • Erwan Lecarpentier, Sebastian Rapp, Marc Melo, Emmanuel Rachelson

Autonomous unpowered flight is a challenge for control and guidance systems: all the energy the aircraft might use during flight has to be harvested directly from the atmosphere.

Q-Learning
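
The entry names only Q-Learning as the task tag. For reference, the textbook model-free Q-learning update such an agent builds on (a generic sketch assuming a discrete state/action encoding, not the paper's soaring-specific controller):

    import numpy as np

    def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
        # Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s_next, a') - Q(s, a))
        td_target = r + gamma * np.max(Q[s_next])
        Q[s, a] += alpha * (td_target - Q[s, a])
        return Q

Here Q is a (num_states, num_actions) array and s, a, s_next are integer indices; the learning rate and discount are illustrative defaults.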

Learning to Handle Parameter Perturbations in Combinatorial Optimization: an Application to Facility Location

no code implementations • 12 Jul 2019 • Andrea Lodi, Luca Mossina, Emmanuel Rachelson

Although presented through the application to the facility location problem, the approach developed here is general and explores a new perspective on the exploitation of past experience in combinatorial optimization.

Combinatorial Optimization

Disentangled cyclic reconstruction for domain adaptation

no code implementations • 1 Jan 2021 • David Bertoin, Emmanuel Rachelson

The domain adaptation problem involves learning a unique classification or regression model capable of performing on both a source and a target domain.

Disentanglement • Unsupervised Domain Adaptation

On Neural Consolidation for Transfer in Reinforcement Learning

no code implementations • 5 Oct 2022 • Valentin Guillet, Dennis G. Wilson, Carlos Aguilar-Melchor, Emmanuel Rachelson

Although transfer learning is considered to be a milestone in deep reinforcement learning, the mechanisms behind it are still poorly understood.

reinforcement-learning • Reinforcement Learning (RL) +1

Neural Distillation as a State Representation Bottleneck in Reinforcement Learning

no code implementations • 5 Oct 2022 • Valentin Guillet, Dennis G. Wilson, Carlos Aguilar-Melchor, Emmanuel Rachelson

Learning a good state representation is a critical skill when dealing with multiple tasks in Reinforcement Learning as it allows for transfer and better generalization between tasks.

reinforcement-learning • Reinforcement Learning (RL)

Curiosity creates Diversity in Policy Search

no code implementations • 7 Dec 2022 • Paul-Antoine Le Tolguenec, Emmanuel Rachelson, Yann Besse, Dennis G. Wilson

In this work, we use a recently proposed definition of intrinsic motivation, Curiosity, in an evolutionary policy search method.
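
The abstract excerpt only states that a curiosity signal is plugged into an evolutionary policy search. A common formulation of curiosity is the prediction error of a learned forward dynamics model; a minimal sketch under that assumption (the toy linear model and the beta-weighted fitness combination are illustrative, not the paper's method):

    import numpy as np

    class ForwardModel:
        # Toy linear forward model f(s, a) -> s'; its prediction error is used
        # as the curiosity signal.
        def __init__(self, state_dim, action_dim, lr=1e-2, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.normal(scale=0.1, size=(state_dim + action_dim, state_dim))
            self.lr = lr

        def curiosity(self, states, actions, next_states):
            # Mean squared prediction error over a batch of transitions,
            # while the model keeps learning the dynamics.
            x = np.concatenate([states, actions], axis=-1)
            err = x @ self.W - next_states
            self.W -= self.lr * x.T @ err / len(states)
            return float(np.mean(np.sum(err ** 2, axis=-1)))

    def fitness(extrinsic_return, curiosity, beta=0.5):
        # Score used to rank candidate policies in the evolutionary loop:
        # task return plus a curiosity bonus (beta is an assumed trade-off weight).
        return extrinsic_return + beta * curiosity

Policies whose rollouts reach poorly predicted states score higher, which is one way a curiosity bonus can push an evolutionary search toward behavioral diversity.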
