Large-Scale Study of Curiosity-Driven Learning

Reinforcement learning algorithms rely on carefully engineered environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the development of reward functions that are intrinsic to the agent. Curiosity is one such intrinsic reward, which uses prediction error as the reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance and a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate the limitations of prediction-based rewards in stochastic setups. Game-play videos and code are at https://pathak22.github.io/large-scale-curiosity/
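
To make the core idea concrete, the sketch below computes a curiosity reward as the forward-model prediction error in a fixed (random) feature space, the "random features" variant discussed above. All class names, layer sizes, and hyperparameters are illustrative assumptions, not the authors' released implementation; the inverse-dynamics loss used to train learned features is omitted.

```python
import torch
import torch.nn as nn


class CuriosityReward(nn.Module):
    """Intrinsic reward = forward-model prediction error in feature space.

    Illustrative sketch only; names and layer sizes are assumptions.
    """

    def __init__(self, obs_dim: int, action_dim: int, feat_dim: int = 64):
        super().__init__()
        # Feature embedding phi(s), frozen at its random initialization
        # (the "random features" variant studied in the paper).
        self.embed = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim)
        )
        for p in self.embed.parameters():
            p.requires_grad_(False)
        # Forward dynamics model f(phi(s), a) -> predicted phi(s').
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + action_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim)
        )

    def forward(self, obs, action, next_obs):
        with torch.no_grad():  # features are fixed targets, never trained here
            phi = self.embed(obs)
            phi_next = self.embed(next_obs)
        pred = self.forward_model(torch.cat([phi, action], dim=-1))
        # Per-transition squared prediction error: used (detached) as the
        # intrinsic reward and, via .mean(), as the forward-model loss.
        return 0.5 * (pred - phi_next).pow(2).sum(dim=-1)


# Hypothetical usage: reward a batch of transitions and update the forward model.
model = CuriosityReward(obs_dim=8, action_dim=4)
opt = torch.optim.Adam(model.forward_model.parameters(), lr=1e-4)
obs, act, nxt = torch.randn(32, 8), torch.randn(32, 4), torch.randn(32, 8)
error = model(obs, act, nxt)
intrinsic_reward = error.detach()  # fed to the policy-gradient learner (e.g. PPO)
opt.zero_grad(); error.mean().backward(); opt.step()
```

In a purely curiosity-driven setup, this intrinsic reward replaces the environment reward entirely; swapping the frozen embedding for one trained with an auxiliary objective gives the learned-feature variants compared in the paper.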

Results (Task: Atari Games · Dataset: Atari 2600 · Model: Intrinsic Reward Agent · Metric: Score)

Game                  Score     Global Rank
Freeway                 32.8    # 23
Gravitar              1165.1    # 21
Montezuma's Revenge   2504.6    # 14
Private Eye           3036.5    # 16
Venture                416.0    # 24
