Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning

19 Feb 2020 · Noah Y. Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, Martin Riedmiller

Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed dataset (batch) of environment interactions is available and no new experience can be acquired. This property makes these algorithms appealing for real-world problems such as robot control...
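The core idea of "keep doing what worked" can be illustrated with a toy sketch: from a fixed batch of transitions, estimate advantages against a simple baseline and fit a behavior prior only on the transitions that outperformed it. The setup below (a 1-D toy task, a mean-return baseline, and sign-of-state binning) is purely illustrative and is not the paper's actual algorithm or code.

```python
import numpy as np

# Illustrative sketch: advantage-filtered behavior modelling on a fixed
# batch. All names and the toy environment are hypothetical.

rng = np.random.default_rng(0)

# Toy batch: 1-D states, discrete actions {0, 1}, observed returns.
# The "good" action is 1 for positive states and 0 for negative states.
states = rng.normal(size=(500, 1))
actions = rng.integers(0, 2, size=500)
returns = (actions == (states[:, 0] > 0)).astype(float) \
    + 0.1 * rng.normal(size=500)

# Crude critic: the batch-mean return serves as the value baseline.
baseline = returns.mean()
advantages = returns - baseline

# Keep only transitions that "worked" (positive advantage) and fit a
# simple behavior prior: empirical action frequencies per state sign.
mask = advantages > 0
prior = {}
for sign in (-1, 1):
    sel = mask & (np.sign(states[:, 0]) == sign)
    counts = np.bincount(actions[sel], minlength=2).astype(float)
    prior[sign] = counts / counts.sum()

print(prior)
```

A learned policy can then be regularized toward this prior (rather than toward the raw, possibly poor, behavior data), which is the intuition behind using an advantage-filtered behavioral model as a trust-region anchor in the offline setting.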
