no code implementations • 8 Apr 2024 • Tobias Meggendorfer, Maximilian Weininger, Patrick Wienhöft
Markov decision processes (MDPs) are a fundamental model for decision making under uncertainty.
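The standard way to solve such an MDP for expected total discounted reward is value iteration. The following is a minimal illustrative sketch on a made-up two-state MDP; the model, names, and numbers are assumptions for illustration, not taken from the paper.

```python
# Toy MDP: transitions[state][action] = list of (probability, successor, reward).
# The model below is invented purely to demonstrate value iteration.
transitions = {
    "s0": {"a": [(0.9, "s1", 1.0), (0.1, "s0", 0.0)],
           "b": [(1.0, "s0", 0.2)]},
    "s1": {"a": [(1.0, "s1", 0.0)]},
}

def value_iteration(gamma=0.9, eps=1e-8):
    """Iterate the Bellman optimality operator until values change by < eps."""
    v = {s: 0.0 for s in transitions}
    while True:
        new_v = {
            s: max(
                sum(p * (r + gamma * v[t]) for p, t, r in outcomes)
                for outcomes in actions.values()
            )
            for s, actions in transitions.items()
        }
        if max(abs(new_v[s] - v[s]) for s in v) < eps:
            return new_v
        v = new_v

values = value_iteration()
# In this toy model, staying in s0 via action "b" is optimal: v(s0) = 2.0.
```

Here the fixed point can be checked by hand: action "b" yields v(s0) = 0.2 + 0.9·v(s0), i.e. v(s0) = 2.0, which beats action "a".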
no code implementations • 14 Mar 2024 • Tomáš Brázdil, Krishnendu Chatterjee, Martin Chmelik, Vojtěch Forejt, Jan Křetínský, Marta Kwiatkowska, Tobias Meggendorfer, David Parker, Mateusz Ujma
The presented framework focuses on probabilistic reachability, which is a core problem in verification, and is instantiated in two distinct scenarios.
no code implementations • 27 Jul 2023 • Guy Avni, Tobias Meggendorfer, Suman Sadhukhan, Josef Tkadlec, Đorđe Žikelić
We consider, for the first time, poorman discrete-bidding, in which the granularity of the bids is restricted and the higher bid is paid to the bank.
1 code implementation • 24 May 2023 • Jan Křetínský, Tobias Meggendorfer, Maximilian Prokop, Sabine Rieder
Firstly, checking whether a guessed strategy is winning is easier than constructing one.
no code implementations • 19 Apr 2023 • Jan Křetínský, Tobias Meggendorfer, Maximilian Weininger
In this paper, we provide the first stopping criteria for VI on SG with total reward and mean payoff, yielding the first anytime algorithms in these settings.
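The idea behind such stopping criteria can be illustrated on the simpler case of reachability in a Markov chain: iterate a lower and an upper bound on the reachability probability and stop once their gap is below a tolerance. This is only a hedged sketch on an invented chain; the paper's contribution is making such bounds sound for total reward and mean payoff on stochastic games, which needs substantially more machinery (e.g. end-component handling).

```python
# Toy Markov chain: probs[s] = list of (probability, successor).
# "goal" and "sink" are absorbing; we bound the probability of reaching "goal".
probs = {
    "s0": [(0.5, "goal"), (0.5, "s1")],
    "s1": [(0.5, "s0"), (0.5, "sink")],
    "goal": [(1.0, "goal")],
    "sink": [(1.0, "sink")],
}

def interval_iteration(eps=1e-6):
    """Iterate lower (from 0) and upper (from 1) bounds until the gap < eps."""
    lo = {s: 0.0 for s in probs}
    hi = {s: 1.0 for s in probs}
    lo["goal"], hi["sink"] = 1.0, 0.0
    def step(v):
        return {s: (1.0 if s == "goal" else 0.0 if s == "sink" else
                    sum(p * v[t] for p, t in probs[s])) for s in probs}
    while max(hi[s] - lo[s] for s in probs) >= eps:
        lo, hi = step(lo), step(hi)
    return lo, hi

lo, hi = interval_iteration()
# The gap certifies the error: the true value lies in [lo[s], hi[s]].
```

Stopping once the gap closes gives an anytime guarantee: at every iteration the pair (lo, hi) is a valid enclosure, so the computation can be interrupted with a quantified error bound.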
no code implementations • 18 Jan 2023 • Tobias Meggendorfer
A classical problem for Markov chains is determining their stationary (or steady-state) distribution.
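For an ergodic chain the stationary distribution π is the solution of π P = π with the entries of π summing to one, and it can be approximated by power iteration. The chain below is a made-up two-state example, not one from the paper.

```python
# Row-stochastic transition matrix of a toy two-state Markov chain.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def stationary(P, iters=10_000):
    """Approximate the stationary distribution by repeatedly applying P."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pi = stationary(P)
# For this chain, pi = (5/6, 1/6): balance gives 0.1*pi[0] = 0.5*pi[1].
```

Power iteration converges geometrically at the rate of the chain's second-largest eigenvalue (here 0.4), so a few dozen iterations already suffice in this example.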
no code implementations • 3 Mar 2022 • Tobias Meggendorfer
We treat the problem of risk-aware control for stochastic shortest path (SSP) on Markov decision processes (MDP).
no code implementations • 10 Aug 2020 • Kush Grover, Jan Křetínský, Tobias Meggendorfer, Maximilian Weininger
As this problem is undecidable in general, assumptions on the MDP are necessary.
no code implementations • 17 Jun 2019 • Jan Křetínský, Tobias Meggendorfer
We introduce a framework for approximate analysis of Markov decision processes (MDP) with bounded-, unbounded-, and infinite-horizon properties.