Search Results for author: Jean-François Raskin

Found 10 papers, 1 paper with code

Lifted Model Checking for Relational MDPs

no code implementations · 22 Jun 2021 · Wen-Chi Yang, Jean-François Raskin, Luc De Raedt

We present pCTL-REBEL, a lifted model checking approach for verifying pCTL properties of relational MDPs.

Model-based Reinforcement Learning · Reinforcement Learning

Active Learning of Sequential Transducers with Side Information about the Domain

no code implementations · 23 Apr 2021 · Raphaël Berthon, Adrien Boiret, Guillermo A. Perez, Jean-François Raskin

We show that there exists an algorithm using string equation solvers that uses this knowledge to learn subsequential string transducers with a better guarantee on the required number of equivalence queries than classical active learning.

Active Learning

Stackelberg-Pareto Synthesis (Full Version)

no code implementations · 17 Feb 2021 · Véronique Bruyère, Jean-François Raskin, Clément Tamines

In this paper, we study the framework of two-player Stackelberg games played on graphs in which Player 0 announces a strategy and Player 1 responds rationally with a strategy that is an optimal response.

Computer Science and Game Theory

Online Learning of Non-Markovian Reward Models

no code implementations · 26 Sep 2020 · Gavin Rens, Jean-François Raskin, Raphaël Reynouad, Giuseppe Marra

In our formal setting, we consider a Markov decision process (MDP) that models the dynamics of the environment in which the agent evolves and a Mealy machine synchronized with this MDP to formalize the non-Markovian reward function.
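The Mealy-machine construction described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual formalization: the state names, labels, and transition table below are invented for the example.

```python
# Hypothetical sketch: a Mealy machine that runs in lockstep with an MDP
# and emits rewards, making the reward depend on the history of labels
# rather than only on the current MDP state (i.e. a non-Markovian reward).

class MealyRewardMachine:
    def __init__(self, initial_state, transitions):
        # transitions: (machine_state, mdp_label) -> (next_state, reward)
        self.state = initial_state
        self.transitions = transitions

    def step(self, mdp_label):
        """Advance on the label of the MDP transition; emit its reward."""
        self.state, reward = self.transitions[(self.state, mdp_label)]
        return reward

# Invented example: reward 1 only when "goal" is reached after "key".
machine = MealyRewardMachine("start", {
    ("start", "key"): ("have_key", 0),
    ("start", "goal"): ("start", 0),
    ("have_key", "key"): ("have_key", 0),
    ("have_key", "goal"): ("done", 1),
    ("done", "key"): ("done", 0),
    ("done", "goal"): ("done", 0),
})

rewards = [machine.step(label) for label in ["goal", "key", "goal", "goal"]]
print(rewards)  # [0, 0, 1, 0]
```

Note how the same MDP label ("goal") yields different rewards depending on the machine's state, which is exactly what a state-based Markovian reward function cannot express.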

Active Learning · Online Learning

Safe Learning for Near Optimal Scheduling

no code implementations · 19 May 2020 · Damien Busatto-Gaston, Debraj Chakraborty, Shibashis Guha, Guillermo A. Pérez, Jean-François Raskin

In this paper, we investigate the combination of synthesis, model-based learning, and online sampling techniques to obtain safe and near-optimal schedulers for a preemptible task scheduling problem.

Q-Learning

Mixing Probabilistic and non-Probabilistic Objectives in Markov Decision Processes

no code implementations · 28 Apr 2020 · Raphaël Berthon, Shibashis Guha, Jean-François Raskin

In this paper, we consider algorithms to decide the existence of strategies in MDPs for Boolean combinations of objectives.

Learning Non-Markovian Reward Models in MDPs

no code implementations · 25 Jan 2020 · Gavin Rens, Jean-François Raskin

There are situations in which an agent should receive rewards only after having accomplished a series of previous tasks.

Active Learning

Learning-Based Mean-Payoff Optimization in an Unknown MDP under Omega-Regular Constraints

no code implementations · 24 Apr 2018 · Jan Křetínský, Guillermo A. Pérez, Jean-François Raskin

Assuming the support of the unknown transition function and a lower bound on the minimal transition probability are known in advance, we show that in MDPs consisting of a single end component, two combinations of guarantees on the parity and mean-payoff objectives can be achieved depending on how much memory one is willing to use.

Online Learning

Threshold Constraints with Guarantees for Parity Objectives in Markov Decision Processes

no code implementations · 17 Feb 2017 · Raphaël Berthon, Mickael Randour, Jean-François Raskin

We establish that, for all variants of this problem, deciding the existence of a strategy lies in ${\sf NP} \cap {\sf coNP}$, the same complexity class as classical parity games.

Optimizing Expectation with Guarantees in POMDPs (Technical Report)

1 code implementation · 26 Nov 2016 · Krishnendu Chatterjee, Petr Novotný, Guillermo A. Pérez, Jean-François Raskin, Đorđe Žikelić

In this work we go beyond both the "expectation" and "threshold" approaches and consider a "guaranteed payoff optimization (GPO)" problem for POMDPs, where we are given a threshold $t$ and the objective is to find a policy $\sigma$ such that a) each possible outcome of $\sigma$ yields a discounted-sum payoff of at least $t$, and b) the expected discounted-sum payoff of $\sigma$ is optimal (or near-optimal) among all policies satisfying a).
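The GPO selection criterion itself can be sketched as follows. This is an illustrative toy, not the paper's algorithm: it assumes each candidate policy is already given with its set of possible discounted-sum payoffs and its expected payoff (computing these for a real POMDP is the hard part the paper addresses), and the policy records below are invented.

```python
# Toy sketch of guaranteed payoff optimization (GPO): among policies whose
# every possible outcome pays at least the threshold t (condition a), pick
# one maximizing expected discounted-sum payoff (condition b).

def guaranteed_payoff_optimization(policies, t):
    """Return a policy satisfying (a) with maximal expectation, or None."""
    # Condition (a): the worst-case outcome must already meet the threshold.
    safe = [p for p in policies if min(p["outcomes"]) >= t]
    if not safe:
        return None
    # Condition (b): among the safe policies, maximize the expectation.
    return max(safe, key=lambda p: p["expected"])

# Invented candidates: "risky" has the best expectation but can pay 0.
policies = [
    {"name": "risky",   "outcomes": [0.0, 10.0], "expected": 5.0},
    {"name": "careful", "outcomes": [2.0, 4.0],  "expected": 3.0},
    {"name": "timid",   "outcomes": [2.0, 2.5],  "expected": 2.2},
]
best = guaranteed_payoff_optimization(policies, t=2.0)
print(best["name"])  # careful
```

The example makes the trade-off concrete: the policy with the highest expectation is excluded because one of its outcomes falls below the threshold, so GPO optimizes expectation only over the policies that keep the worst-case guarantee.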
