Plan-Based Asymptotically Equivalent Reward Shaping

ICLR 2021 · Ingmar Schubert, Ozgur S Oguz, Marc Toussaint

In high-dimensional state spaces, the usefulness of Reinforcement Learning (RL) is limited by the problem of exploration. Previously, this issue has been addressed with potential-based reward shaping (PB-RS). In the present work, we introduce Asymptotically Equivalent Reward Shaping (ASEQ-RS), which relaxes the strict optimality guarantees of PB-RS to a guarantee of asymptotic equivalence. Being less restrictive, ASEQ-RS admits reward shaping functions that are better suited to improving the sample efficiency of RL algorithms. In particular, we consider settings in which the agent has access to an approximate plan. Using simulated robotic manipulation tasks, we demonstrate that plan-based ASEQ-RS significantly improves the sample efficiency of RL over plan-based PB-RS.
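For context, the sketch below illustrates the plan-based PB-RS baseline that the abstract contrasts with: the standard potential-based shaping term F(s, s') = γΦ(s') − Φ(s) (Ng et al., 1999) is added to the environment reward, with the potential Φ taken as negative distance to the nearest waypoint of an approximate plan. This is a minimal illustration under assumptions of my own; the gymnasium wrapper, the waypoint-based potential, and all names are hypothetical and are not the paper's implementation of ASEQ-RS.

```python
import gymnasium as gym
import numpy as np

class PotentialShapedEnv(gym.Wrapper):
    """Illustrative PB-RS wrapper (not the paper's method).

    Adds the potential-based shaping term
        F(s, s') = gamma * phi(s') - phi(s)
    (Ng et al., 1999) to the environment reward, where phi is a
    hypothetical plan-based potential: negative distance from the
    current state to the nearest waypoint of an approximate plan.
    """

    def __init__(self, env, plan_waypoints, gamma=0.99):
        super().__init__(env)
        self.plan = np.asarray(plan_waypoints)  # shape (K, state_dim)
        self.gamma = gamma
        self._prev_phi = None

    def _phi(self, state):
        # Higher potential the closer the state is to the plan.
        dists = np.linalg.norm(self.plan - np.asarray(state), axis=1)
        return -float(dists.min())

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self._prev_phi = self._phi(obs)
        return obs, info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        phi = self._phi(obs)
        shaped = reward + self.gamma * phi - self._prev_phi
        self._prev_phi = phi
        return obs, shaped, terminated, truncated, info
```

Because F is a potential difference, this baseline preserves the optimal policy of the original task; ASEQ-RS, as described in the abstract, deliberately relaxes this strict guarantee to asymptotic equivalence in exchange for shaping functions that guide exploration more aggressively.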
