Disentangled Predictive Representation for Meta-Reinforcement Learning

A major challenge in reinforcement learning is the design of agents that are able to generalize across tasks that share common dynamics. A viable solution is meta-reinforcement learning, which identifies common structures among past tasks that can then be generalized to new tasks (meta-test). Prior works learn the meta-representation jointly while solving tasks, resulting in representations that do not generalize well across policies and lead to sample inefficiency during the meta-test phase. In this work, we introduce state2vec, an efficient and low-complexity unsupervised framework for learning disentangled representations that are more general. The state embedding vectors learned with state2vec capture the geometry of the underlying state space, resulting in high-quality basis functions for linear value function approximation.
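The abstract's final claim can be illustrated with a minimal sketch of linear value function approximation over fixed state embeddings. Note this is not the paper's actual method: the `phi` matrix below uses random features as a stand-in for state2vec embeddings (whose learning procedure is not specified in this abstract), and the chain-MDP value targets are invented for illustration.

```python
import numpy as np

# Hypothetical setup: a small MDP with known target values.
# `phi` plays the role of state2vec embedding vectors; here it is random,
# since the abstract does not describe the embedding procedure itself.
rng = np.random.default_rng(0)
n_states, d = 10, 4
phi = rng.normal(size=(n_states, d))        # one d-dim embedding per state
true_v = np.arange(n_states, dtype=float)   # stand-in value function targets

# Linear value function approximation: V(s) ~ phi(s) . w,
# with weights fit by least squares against the targets.
w, *_ = np.linalg.lstsq(phi, true_v, rcond=None)
v_hat = phi @ w
print(np.round(v_hat, 2))
```

With d much smaller than the number of states, the quality of `v_hat` depends entirely on how well the embedding basis captures the structure of the value function, which is the property the abstract attributes to state2vec embeddings.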
