Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models

3 Jul 2015 · Bradly C. Stadie, Sergey Levine, Pieter Abbeel

Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. In this paper, we consider the challenging Atari games domain, which requires processing raw pixel inputs and delayed rewards. We evaluate several more sophisticated exploration strategies, including Thompson sampling and Boltzmann exploration, and propose a new exploration method based on assigning exploration bonuses from a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. In the Atari domain, our method provides the most consistent improvement across a range of games that pose a major challenge for prior methods. In addition to raw game scores, we also develop an AUC-100 metric for the Atari Learning domain to evaluate the impact of exploration on this benchmark.
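The core idea of the proposed method is to reward the agent for visiting transitions that a concurrently trained dynamics model predicts poorly. Below is a minimal sketch of that mechanism, assuming a simple MLP dynamics model over state feature vectors and a bonus proportional to squared prediction error; the class and function names (DynamicsModel, exploration_bonus), the bonus scaling beta, and the architecture are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: exploration bonus from the prediction error of a
# concurrently learned dynamics model. The paper's actual state encoding,
# normalization, and bonus schedule differ; this only shows the mechanism.
import torch
import torch.nn as nn


class DynamicsModel(nn.Module):
    """Predicts the next state feature vector from the current state and action."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.n_actions = n_actions
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_actions, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        one_hot = nn.functional.one_hot(action, self.n_actions).float()
        return self.net(torch.cat([state, one_hot], dim=-1))


def exploration_bonus(model: DynamicsModel,
                      state: torch.Tensor,
                      action: torch.Tensor,
                      next_state: torch.Tensor,
                      beta: float = 0.05) -> torch.Tensor:
    """Bonus proportional to the model's squared prediction error:
    transitions the model predicts poorly are treated as novel."""
    with torch.no_grad():
        pred = model(state, action)
        error = ((pred - next_state) ** 2).mean(dim=-1)
    return beta * error


# Usage: add the bonus to the environment reward before passing it to the
# RL learner. The dynamics model is trained online by regression on observed
# transitions, so the bonus shrinks for transitions the agent can predict.
if __name__ == "__main__":
    model = DynamicsModel(state_dim=64, n_actions=6)
    s = torch.randn(1, 64)
    a = torch.tensor([2])
    s_next = torch.randn(1, 64)
    r_env = 1.0
    r_total = r_env + exploration_bonus(model, s, a, s_next).item()
    print(r_total)
```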


Results from the Paper


Task        | Dataset                          | Model | Metric | Value   | Global Rank
Atari Games | Atari 2600 Freeway               | MP-EB | Score  | 27.0    | #42
Atari Games | Atari 2600 Frostbite             | MP-EB | Score  | 507.0   | #38
Atari Games | Atari 2600 Montezuma's Revenge   | MP-EB | Score  | 142     | #25
Atari Games | Atari 2600 Q*Bert                | MP-EB | Score  | 15805   | #24
Atari Games | Atari 2600 Venture               | MP-EB | Score  | 0.0     | #49
