Anderson Acceleration for Partially Observable Markov Decision Processes: A Maximum Entropy Approach

28 Nov 2022 · MinGyu Park, Jaeuk Shin, Insoon Yang

Partially observable Markov decision processes (POMDPs) constitute a rich mathematical framework that embraces a large class of complex sequential decision-making problems under uncertainty with limited observations. However, the complexity of POMDPs poses various computational challenges, motivating the need for an efficient algorithm that rapidly finds a good suboptimal solution. In this paper, we propose a novel accelerated offline POMDP algorithm exploiting Anderson acceleration (AA), which efficiently solves fixed-point problems by reusing previous solution estimates. Our algorithm is based on the Q-function approximation (QMDP) method to alleviate the scalability issues inherent in POMDPs. Inspired by the quasi-Newton interpretation of AA, we propose a maximum entropy variant of QMDP, which we call soft QMDP, to fully benefit from AA. We prove that the overall algorithm converges to the suboptimal solution obtained by soft QMDP. Our algorithm can also be implemented in a model-free manner using simulation data. Provable error bounds on the residual and the solution are provided to examine how simulation errors propagate through the proposed algorithm. Finally, the performance of our algorithm is tested on several benchmark problems. According to the experimental results, the proposed algorithm converges significantly faster than its standard counterparts without degrading the solution quality.
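To make the mechanics concrete, below is a minimal, illustrative sketch (not the authors' implementation) of the ingredients the abstract describes: a soft (maximum-entropy) Bellman backup on the underlying MDP in the spirit of soft QMDP, wrapped in an Anderson-acceleration loop that combines previous backups, plus QMDP-style belief-weighted action selection. All names, shapes, and parameters here (P, R, gamma, tau, the memory m, and the absence of any safeguarding or regularization) are assumptions made for exposition; the paper's exact updates and convergence safeguards may differ.

    import numpy as np

    def soft_bellman_backup(Q, P, R, gamma, tau):
        # Soft (max-entropy) backup: V(s) = tau * log sum_a exp(Q(s, a) / tau),
        # computed stably, followed by Q <- R + gamma * E[V].
        # Assumed shapes: P is (S, A, S), R is (S, A), Q is (S, A).
        q_max = Q.max(axis=1)
        V = q_max + tau * np.log(np.exp((Q - q_max[:, None]) / tau).sum(axis=1))
        return R + gamma * P @ V

    def anderson_soft_vi(P, R, gamma=0.95, tau=0.1, m=5, iters=200, tol=1e-8):
        # Anderson acceleration with memory m applied to the soft Bellman fixed point.
        S, A = R.shape
        q = np.zeros(S * A)                 # flattened Q-function iterate
        qs, fs = [], []                     # histories of iterates and residuals
        for _ in range(iters):
            g = soft_bellman_backup(q.reshape(S, A), P, R, gamma, tau).ravel()
            f = g - q                       # fixed-point residual
            if np.linalg.norm(f, np.inf) < tol:
                break
            qs.append(q); fs.append(f)
            if len(qs) > m + 1:             # keep at most m residual differences
                qs.pop(0); fs.pop(0)
            if len(qs) == 1:
                q = g                       # plain value-iteration step at the start
                continue
            # Least-squares mixing weights over differences of past residuals.
            dF = np.stack([fs[i + 1] - fs[i] for i in range(len(fs) - 1)], axis=1)
            w, *_ = np.linalg.lstsq(dF, f, rcond=None)
            dG = np.stack([(qs[i + 1] + fs[i + 1]) - (qs[i] + fs[i])
                           for i in range(len(qs) - 1)], axis=1)
            q = g - dG @ w                  # AA combination of past backups
        return q.reshape(S, A)

    def qmdp_action(Q, belief):
        # QMDP-style control: weight Q(s, a) by the current belief over states.
        return int(np.argmax(belief @ Q))

The design point suggested by the abstract's quasi-Newton interpretation is that the log-sum-exp (soft) backup is a smooth fixed-point map, which presumably is what lets AA's least-squares extrapolation behave well; the hard max used in plain QMDP-style value iteration is only piecewise smooth.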
