POAR: Efficient Policy Optimization via Online Abstract State Representation Learning

While the rapid progress of deep learning fuels end-to-end reinforcement learning (RL), direct application, especially in high-dimensional spaces such as robotic scenarios, still suffers from low sample efficiency. Therefore, State Representation Learning (SRL) has been proposed to encode task-relevant features from complex sensory data into low-dimensional states. However, SRL is usually implemented with a decoupled strategy in which the observation-to-state mapping is learned separately from the policy, which is prone to overfitting. To address this problem, we summarize the state-of-the-art (SOTA) SRL sub-tasks from previous work and present a new algorithm, Policy Optimization via Abstract Representation (POAR), which integrates SRL into the policy optimization phase. First, we use the RL loss to help update the SRL model, so that the learned states evolve to meet the demands of RL while maintaining a good physical interpretation. Second, we introduce a dynamic loss weighting mechanism so that both models can efficiently adapt to each other. Third, we introduce a new SRL prior, domain resemblance, which leverages expert demonstrations to improve the learned representations. Finally, we provide real-time access to the state graph to monitor the course of learning. Experiments indicate that POAR significantly outperforms SOTA RL algorithms and decoupled SRL strategies in terms of sample efficiency and final reward. We empirically verify that POAR efficiently handles high-dimensional tasks and facilitates training real-life robots directly from scratch.
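To make the integration concrete, the sketch below illustrates how SRL prior losses and the RL loss might be combined under a dynamic weighting scheme so that a single backward pass updates both the encoder and the policy. This is a minimal, hypothetical sketch based only on the abstract: the encoder architecture, the specific prior terms (including the `domain_resemblance` term), and all names and weight values are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical encoder mapping raw observations to low-dimensional abstract states.
class Encoder(nn.Module):
    def __init__(self, obs_dim, state_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, state_dim),
        )

    def forward(self, obs):
        return self.net(obs)

def combined_loss(srl_losses, rl_loss, weights):
    """Weighted sum of SRL prior losses and the RL (policy) loss.

    `weights` holds dynamic coefficients that would be adjusted during
    training so the SRL and RL objectives can adapt to each other.
    """
    total = weights["rl"] * rl_loss
    for name, loss in srl_losses.items():
        total = total + weights[name] * loss
    return total

# Example: forward/inverse model priors plus a "domain resemblance" term
# computed from expert demonstrations (all values are placeholders).
srl_losses = {
    "forward_model": torch.tensor(0.4),
    "inverse_model": torch.tensor(0.3),
    "domain_resemblance": torch.tensor(0.2),
}
rl_loss = torch.tensor(1.5)
weights = {"rl": 1.0, "forward_model": 0.5,
           "inverse_model": 0.5, "domain_resemblance": 0.1}

loss = combined_loss(srl_losses, rl_loss, weights)
# In a full training loop, loss.backward() would propagate gradients through
# both the policy and the encoder, so the abstract states evolve with the policy.
```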
