Markov Decision Processes with Maximum Entropy Rate for Surveillance Tasks

23 Nov 2022  ·  Yu Chen, ShaoYuan Li, Xiang Yin

We consider the problem of synthesizing optimal policies for Markov decision processes (MDPs) subject to both a utility objective and a security constraint. Specifically, our goal is to maximize the \emph{entropy rate} of the MDP while achieving a surveillance task, in the sense that a given region of interest is visited infinitely often with probability one. Such a policy is of interest because it guarantees completion of the task while maximizing the \emph{unpredictability} of the limit behavior of the system. Existing works either focus on the total entropy, which is not suitable for surveillance tasks over an infinite horizon, or do not consider surveillance tasks at all. We provide a complete solution to this problem. Specifically, we first present an algorithm for synthesizing entropy-rate-maximizing policies for communicating MDPs. Then, based on a new state classification method, we show that the entropy rate maximization problem under a surveillance task can be solved in polynomial time. We illustrate the proposed algorithm with a case study of a robot planning scenario.
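The paper has no accompanying code. As a minimal illustration of the central quantity, the following Python sketch computes the entropy rate of the Markov chain induced by a fixed stationary policy on an MDP, using the standard formula H = -Σ_i π_i Σ_j P_ij log P_ij for an irreducible chain with stationary distribution π. The transition matrix and function names here are hypothetical and not taken from the paper; this is not the authors' synthesis algorithm, only the objective it optimizes.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of an irreducible stochastic matrix P (pi P = pi)."""
    n = P.shape[0]
    # Solve pi (P - I) = 0 together with sum(pi) = 1 as a least-squares system.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def entropy_rate(P):
    """Entropy rate H = -sum_i pi_i sum_j P_ij log2 P_ij, in bits per step."""
    pi = stationary_distribution(P)
    with np.errstate(divide="ignore"):
        # Convention: 0 * log 0 = 0, so zero-probability transitions contribute nothing.
        logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))

# Toy example: transition matrix induced by some stationary policy on a
# hypothetical 3-state communicating MDP (rows sum to one).
P = np.array([
    [0.0, 0.5, 0.5],
    [0.7, 0.0, 0.3],
    [0.4, 0.6, 0.0],
])
print(f"entropy rate: {entropy_rate(P):.4f} bits/step")
```

The synthesis problem studied in the paper can be read as choosing, among all policies whose induced chain visits the region of interest infinitely often with probability one, the one whose induced transition matrix maximizes this quantity.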
