The MineRL Competition on Sample Efficient Reinforcement Learning using Human Priors

22 Apr 2019 · William H. Guss, Cayden Codel, Katja Hofmann, Brandon Houghton, Noboru Kuno, Stephanie Milani, Sharada Mohanty, Diego Perez Liebana, Ruslan Salakhutdinov, Nicholay Topin, Manuela Veloso, Phillip Wang

Though deep reinforcement learning has led to breakthroughs in many difficult domains, these successes have required an ever-increasing number of samples. Because state-of-the-art reinforcement learning (RL) systems require an exponentially increasing number of samples, their development is restricted to a continually shrinking segment of the AI community...




