Search Results for author: Florent Delgrange

Found 6 papers, 2 papers with code

Synthesis of Hierarchical Controllers Based on Deep Reinforcement Learning Policies

no code implementations · 21 Feb 2024 · Florent Delgrange, Guy Avni, Anna Lukina, Christian Schilling, Ann Nowé, Guillermo A. Pérez

We propose a novel approach to the problem of controller design for environments modeled as Markov decision processes (MDPs).

reinforcement-learning

Wasserstein Auto-encoded MDPs: Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees

1 code implementation · 22 Mar 2023 · Florent Delgrange, Ann Nowé, Guillermo A. Pérez

Our approach yields bisimulation guarantees while learning the distilled policy, allowing concrete optimization of the abstraction and representation model quality.

The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models

no code implementations · 6 Mar 2023 · Raphael Avalos, Florent Delgrange, Ann Nowé, Guillermo A. Pérez, Diederik M. Roijers

A probability distribution modeling the belief over the true state can serve as a sufficient statistic of the history, but computing it requires access to the model of the environment and is often intractable.
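For context, the exact Bayesian belief update that this excerpt refers to can be sketched for a small discrete POMDP. This is a minimal illustration of the standard update rule, with hypothetical array shapes; it is not the paper's learned approach, which exists precisely because this computation needs the true model and scales poorly:

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Exact belief update for a discrete POMDP.

    b: current belief over states, shape (S,)
    a: action index taken
    o: observation index received
    T: transition model, T[a, s, s2] = P(s2 | s, a), shape (A, S, S)
    O: observation model, O[a, s2, o] = P(o | s2, a), shape (A, S, num_obs)
    """
    predicted = b @ T[a]             # predict next-state distribution P(s2 | b, a)
    unnorm = predicted * O[a, :, o]  # weight by likelihood of the observation
    return unnorm / unnorm.sum()     # normalize to obtain the posterior belief
```

Each update is O(S^2) per step, and the belief lives in a continuous simplex, which is why maintaining it exactly is infeasible for large or unknown environments.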

Distillation of RL Policies with Formal Guarantees via Variational Abstraction of Markov Decision Processes (Technical Report)

1 code implementation · 17 Dec 2021 · Florent Delgrange, Ann Nowé, Guillermo A. Pérez

Finally, we show how one can use a policy obtained via state-of-the-art RL to efficiently train a variational autoencoder that yields a discrete latent model with provably approximately correct bisimulation guarantees.

Reinforcement Learning (RL)

Life is Random, Time is Not: Markov Decision Processes with Window Objectives

no code implementations · 11 Jan 2019 · Thomas Brihaye, Florent Delgrange, Youssouf Oualhadj, Mickael Randour

The window mechanism was introduced by Chatterjee et al. to strengthen classical game objectives with time bounds.
