Search Results for author: Léonard Hussenot

Found 12 papers, 5 papers with code

vec2text with Round-Trip Translations

no code implementations · 14 Sep 2022 · Geoffrey Cideron, Sertan Girgin, Anton Raichuk, Olivier Pietquin, Olivier Bachem, Léonard Hussenot

We propose a simple data augmentation technique based on round-trip translations and show in extensive experiments that the resulting vec2text model surprisingly leads to vector spaces that fulfill our four desired properties and that this model strongly outperforms both standard and denoising auto-encoders.

Data Augmentation, Denoising +1
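
The round-trip augmentation described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: `translate` is a hypothetical stand-in for a real machine-translation model.

```python
def translate(text, src, dst):
    # Hypothetical stand-in for a real MT model; here we simulate a
    # lossy round trip by lowercasing and normalizing whitespace.
    return " ".join(text.lower().split())

def round_trip(text, pivot="fr"):
    # Translate to a pivot language and back to get a paraphrase-like variant.
    forward = translate(text, src="en", dst=pivot)
    return translate(forward, src=pivot, dst="en")

def augment(corpus, pivots=("fr", "de")):
    # One augmented variant per (sentence, pivot) pair.
    return [round_trip(s, p) for s in corpus for p in pivots]
```

With a real translation model, the round trip is lossy in meaning-preserving ways, which is the source of variation the augmentation exploits.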

Learning Energy Networks with Generalized Fenchel-Young Losses

no code implementations · 19 May 2022 · Mathieu Blondel, Felipe Llinares-López, Robert Dadashi, Léonard Hussenot, Matthieu Geist

To learn the parameters of the energy function, the solution to the underlying optimization problem (the energy minimizer) is typically fed into a loss function.

Imitation Learning
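
The two-stage setup in that sentence, solve an optimization problem over the energy and then feed the minimizer into a loss, can be sketched with a toy quadratic energy (an illustrative choice, not the paper's construction):

```python
import numpy as np

# Toy energy E(y; x, W) = 0.5*||y||^2 - y.(W @ x); it is quadratic in y,
# so the minimizer has the closed form y* = W @ x.

def predict(W, x):
    # Solution of argmin_y E(y; x, W) for the quadratic energy above.
    return W @ x

def squared_loss(W, x, y_true):
    # The solution of the optimization problem is fed into a loss function.
    y_hat = predict(W, x)
    return 0.5 * np.sum((y_hat - y_true) ** 2)
```

In general the argmin has no closed form, which is part of what makes this composition delicate and motivates dedicated losses for energy networks.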

RLDS: an Ecosystem to Generate, Share and Use Datasets in Reinforcement Learning

1 code implementation · 4 Nov 2021 · Sabela Ramos, Sertan Girgin, Léonard Hussenot, Damien Vincent, Hanna Yakubovich, Daniel Toyama, Anita Gergely, Piotr Stanczyk, Raphael Marinier, Jeremiah Harmsen, Olivier Pietquin, Nikola Momchev

We introduce RLDS (Reinforcement Learning Datasets), an ecosystem for recording, replaying, manipulating, annotating and sharing data in the context of Sequential Decision Making (SDM), including Reinforcement Learning (RL), Learning from Demonstrations, Offline RL, and Imitation Learning.

Imitation Learning, Offline RL +1
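
A minimal sketch of the episode/step layout such an ecosystem revolves around (field names here are illustrative, not the actual RLDS schema):

```python
# Each episode is a record of per-step fields; downstream consumers
# (offline RL, imitation learning, ...) iterate over these steps.

def make_step(observation, action, reward, is_last=False):
    return {"observation": observation, "action": action,
            "reward": reward, "is_last": is_last}

def episode_return(episode):
    # Sum of rewards over one recorded episode.
    return sum(step["reward"] for step in episode["steps"])

episode = {
    "episode_id": 0,
    "steps": [
        make_step([0.0], 1, 1.0),
        make_step([0.1], 0, 0.5, is_last=True),
    ],
}
```

Standardizing on one such layout is what lets the same recorded data be replayed, annotated, and shared across SDM settings.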

What Matters for Adversarial Imitation Learning?

no code implementations · NeurIPS 2021 · Manu Orsini, Anton Raichuk, Léonard Hussenot, Damien Vincent, Robert Dadashi, Sertan Girgin, Matthieu Geist, Olivier Bachem, Olivier Pietquin, Marcin Andrychowicz

To tackle this issue, we implement more than 50 of these choices in a generic adversarial imitation learning framework and investigate their impacts in a large-scale study (>500k trained agents) with both synthetic and human-generated demonstrations.

Continuous Control, Imitation Learning
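
The scale of such a study comes from crossing many implementation choices; a sketch of that kind of sweep (the choice names are illustrative examples of knobs adversarial imitation learning frameworks expose, not the paper's exact list):

```python
import itertools

# Hypothetical choice axes; the paper studies over 50 such choices.
choices = {
    "reward_function": ["-log(1-D)", "log(D)", "logit(D)"],
    "discriminator_reg": ["none", "gradient_penalty", "spectral_norm"],
    "observation_norm": ["none", "fixed", "running"],
}

def enumerate_configs(choices):
    # Cartesian product over all choice axes -> one dict per configuration.
    keys = sorted(choices)
    for values in itertools.product(*(choices[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(enumerate_configs(choices))
# Just three 3-way choices already yield 27 configurations.
```

The combinatorics explain the >500k trained agents: each configuration is run across environments, demonstration sources, and seeds.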

Offline Reinforcement Learning with Pseudometric Learning

no code implementations · ICLR Workshop SSL-RL 2021 · Robert Dadashi, Shideh Rezaeifar, Nino Vieillard, Léonard Hussenot, Olivier Pietquin, Matthieu Geist

In the presence of function approximation, and under the assumption of limited coverage of the state-action space of the environment, it is necessary to enforce the policy to visit state-action pairs close to the support of logged transitions.
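
A minimal sketch of that idea, with plain Euclidean distance standing in as a toy surrogate for the learned pseudometric:

```python
import math

def distance_to_support(sa, logged):
    # Distance from a candidate state-action pair to the closest
    # logged transition (Euclidean stand-in for the pseudometric).
    return min(math.dist(sa, other) for other in logged)

def penalized_value(q_value, sa, logged, alpha=1.0):
    # Penalizing by distance to the support pushes the policy toward
    # state-action pairs close to the logged data.
    return q_value - alpha * distance_to_support(sa, logged)
```

Pairs inside the dataset's support incur no penalty; the further the policy strays from logged transitions, the larger the value penalty.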


Show me the Way: Intrinsic Motivation from Demonstrations

no code implementations · 23 Jun 2020 · Léonard Hussenot, Robert Dadashi, Matthieu Geist, Olivier Pietquin

Using an inverse RL approach, we show that complex exploration behaviors, reflecting different motivations, can be learnt and efficiently used by RL agents to solve tasks for which exhaustive exploration is prohibitive.

Decision Making, Experimental Design
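
A count-based stand-in for a demonstration-shaped exploration bonus (the paper learns its bonus with inverse RL; this toy only illustrates the shape of the idea):

```python
from collections import Counter

def demo_bonus(state, demo_counts, total):
    # States visited often in demonstrations receive a larger
    # intrinsic reward, steering exploration toward demonstrated behavior.
    return demo_counts[state] / total

# Hypothetical demonstration trajectory over discrete states.
demo_states = ["s0", "s1", "s1", "s2"]
counts = Counter(demo_states)
```

Adding such a bonus to the task reward biases the agent's exploration toward regions the demonstrator visited, without requiring exhaustive exploration.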

CopyCAT: Taking Control of Neural Policies with Constant Attacks

no code implementations · 29 May 2019 · Léonard Hussenot, Matthieu Geist, Olivier Pietquin

In this setting, the adversary cannot directly modify the agent's state -- its representation of the environment -- but can only attack the agent's observation -- its perception of the environment.

Atari Games, Reinforcement Learning
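
The state/observation distinction above can be sketched as a constant additive mask applied to every observation before a toy policy acts (names and the policy are illustrative, not the paper's code):

```python
def policy(obs):
    # Toy policy: act 1 if the first feature is positive, else 0.
    return 1 if obs[0] > 0 else 0

def attacked_policy(obs, mask):
    # The adversary perturbs only the agent's perception of the
    # environment, not the underlying state.
    perturbed = [o + m for o, m in zip(obs, mask)]
    return policy(perturbed)

# Hypothetical constant mask, reapplied unchanged at every timestep.
mask = [-2.0, 0.0]
```

Because the same mask is reused at every step, the attack needs no per-step computation once the mask is found, which is what makes a constant attack practical.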
