no code implementations • 15 Jan 2024 • Alexander Bork, Debraj Chakraborty, Kush Grover, Jan Kretinsky, Stefanie Mohr
Strategies for partially observable Markov decision processes (POMDPs) typically require memory.
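A minimal sketch of why memory is needed, using a hypothetical toy POMDP (not taken from the paper): both non-goal states emit the same observation, so a memoryless deterministic strategy repeats one action forever and never reaches the goal, while a two-mode strategy that alternates actions does.

```python
# Hypothetical toy POMDP with deterministic transitions.
# States s0 and s1 emit the same observation; 'goal' is absorbing.
T = {  # T[state][action] -> next state
    's0': {'a': 's1', 'b': 's0'},
    's1': {'a': 's0', 'b': 'goal'},
}

def reaches_goal(strategy, start='s0', max_steps=10):
    """Run a finite-memory strategy given as {mode: (action, next_mode)}."""
    state, mode = start, 0
    for _ in range(max_steps):
        if state == 'goal':
            return True
        action, mode = strategy[mode]
        state = T[state][action]
    return state == 'goal'

# Memoryless deterministic strategies: a single self-looping mode.
always_a = {0: ('a', 0)}   # s0 -> s1 -> s0 -> ... never reaches goal
always_b = {0: ('b', 0)}   # stays in s0 forever
# 2-memory strategy: play 'a' in mode 0, then 'b' in mode 1.
alternate = {0: ('a', 1), 1: ('b', 0)}

print(reaches_goal(always_a), reaches_goal(always_b), reaches_goal(alternate))
# -> False False True
```

Since the observation never distinguishes s0 from s1, only the strategy's internal memory can track where the run must be, which is exactly the phenomenon the sentence above describes.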
no code implementations • 21 Jan 2022 • Alexander Bork, Joost-Pieter Katoen, Tim Quatmann
We consider the following decision problem: is the optimal expected total reward to reach a goal state in a partially observable Markov decision process (POMDP) below a given threshold?
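A rough sketch of the quantity involved, on a hypothetical tiny POMDP (names and numbers are illustrative, not from the paper): we propagate a belief over states under a fixed action sequence, accumulate expected reward, and compare it to a threshold. Note this evaluates only one policy, which yields a lower bound on the optimum; deciding the question for the *optimal* policy is the hard part the paper addresses.

```python
# Hypothetical 3-state POMDP: states 0 and 1 are indistinguishable,
# state 2 is the absorbing goal. Transitions are deterministic.
T = {  # T[state][action] -> next state
    0: {'a': 2, 'b': 0},
    1: {'a': 1, 'b': 2},
    2: {'a': 2, 'b': 2},
}
R = {  # reward 1 for the step that enters the goal
    0: {'a': 1.0, 'b': 0.0},
    1: {'a': 0.0, 'b': 1.0},
    2: {'a': 0.0, 'b': 0.0},
}

def expected_reward(belief, actions):
    """Expected total reward of a fixed action sequence from a belief."""
    total = 0.0
    for act in actions:
        total += sum(p * R[s][act] for s, p in belief.items())
        nxt = {}
        for s, p in belief.items():
            ns = T[s][act]
            nxt[ns] = nxt.get(ns, 0.0) + p
        belief = nxt
    return total

belief0 = {0: 0.5, 1: 0.5}                    # uniform initial uncertainty
value = expected_reward(belief0, ['a', 'b'])  # play 'a', then 'b'
threshold = 0.9
print(value, value < threshold)               # -> 1.0 False
```

Playing 'a' then 'b' collects reward 0.5 from each step, so the expected total reward is 1.0 and the threshold question is answered "no" for this policy and bound.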
1 code implementation • 30 Jun 2020 • Alexander Bork, Sebastian Junges, Joost-Pieter Katoen, Tim Quatmann
This paper considers the verification problem for partially observable MDPs, in which the policies make their decisions based on (the history of) the observations emitted by the system.