Utilization of domain knowledge to improve POMDP belief estimation

The partially observable Markov decision process (POMDP) framework is a common approach to decision making under uncertainty. Recently, multiple studies have shown that integrating relevant domain knowledge into POMDP belief estimation can improve the learned policy's performance. In this study, we propose a novel method for integrating domain knowledge into the probabilistic belief update of the POMDP framework using Jeffrey's rule and normalization. We show that domain knowledge can be used to reduce the data requirements of POMDP policy learning with reinforcement learning (RL) and to improve the resulting performance.
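The abstract does not spell out the update mechanics, but Jeffrey's rule itself has a standard form: given a partition of the state space and knowledge-given probabilities for each partition block, the belief mass of each block is rescaled to match the given probabilities while relative beliefs within a block are preserved. The sketch below is a minimal illustration of that rule applied to a discrete POMDP belief vector; the function name, the partition structure, and the final normalization step are assumptions, not the paper's actual method.

```python
import numpy as np

def jeffrey_update(belief, partition, q):
    """Hypothetical sketch of Jeffrey's rule on a discrete belief.

    belief    : probability vector over states
    partition : list of index lists, one per partition block E_i
    q         : domain-knowledge probabilities, one per block (sums to 1)

    Each block's total mass is rescaled to q[i]; within a block,
    relative state probabilities are kept unchanged.
    """
    new_belief = np.zeros_like(belief, dtype=float)
    for block, q_i in zip(partition, q):
        mass = belief[block].sum()
        if mass > 0:
            # Scale the block so its total mass equals q_i.
            new_belief[block] = belief[block] * (q_i / mass)
    # Normalize to guard against blocks with zero prior mass.
    return new_belief / new_belief.sum()

belief = np.array([0.5, 0.3, 0.1, 0.1])
partition = [[0, 1], [2, 3]]   # two blocks of states
q = np.array([0.6, 0.4])       # knowledge-given block probabilities
updated = jeffrey_update(belief, partition, q)
# Block [0, 1] now carries mass 0.6 and block [2, 3] carries 0.4.
```

When the knowledge distribution q matches the belief's own block masses, the update is a no-op, which is the usual consistency property of Jeffrey conditioning.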
