Exploiting Hierarchy for Learning and Transfer in KL-regularized RL

18 Mar 2019 · Dhruva Tirumala, Hyeonwoo Noh, Alexandre Galashov, Leonard Hasenclever, Arun Ahuja, Greg Wayne, Razvan Pascanu, Yee Whye Teh, Nicolas Heess

As reinforcement learning agents are tasked with solving more challenging and diverse tasks, the ability to incorporate prior knowledge into the learning system and to exploit reusable structure in solution space is likely to become increasingly important. The KL-regularized expected reward objective constitutes one possible tool to this end...
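To make the objective mentioned above concrete, a minimal sketch follows: the KL-regularized expected reward objective augments the return with a penalty for diverging from a default (prior) policy, i.e. the agent maximizes the reward minus an α-weighted KL term between its policy π and the default policy π₀ at each step. The function names, the discrete-distribution setting, and the fixed α below are illustrative assumptions, not the paper's implementation.

```python
import math

def kl_categorical(p, q):
    # KL(p || q) for discrete distributions given as probability lists.
    # Terms with p_i == 0 contribute zero by convention.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def kl_regularized_return(rewards, policy_probs, default_probs, alpha=0.1):
    # Per-trajectory objective: sum over steps of
    # reward_t - alpha * KL(pi(.|s_t) || pi_0(.|s_t)).
    # (Illustrative sketch; alpha and the tabular setting are assumptions.)
    return sum(
        r - alpha * kl_categorical(p, q)
        for r, p, q in zip(rewards, policy_probs, default_probs)
    )

# When the agent's policy matches the default policy, the KL penalty
# vanishes and the objective reduces to the plain (undiscounted) return.
uniform = [0.25] * 4
value = kl_regularized_return([1.0, 0.5], [uniform, uniform], [uniform, uniform])
# value == 1.5
```

Choosing π₀ to capture reusable, task-agnostic behavior is what makes this objective a vehicle for transfer: the penalty only discourages deviations from shared structure, not task-specific reward seeking.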



