Diversity is All You Need: Learning Skills without a Reward Function
Intelligent creatures can explore their environments and learn useful skills without supervision. In this paper, we propose DIAYN ('Diversity is All You Need'), a method for learning useful skills without a reward function. Our proposed method learns skills by maximizing an information theoretic objective using a maximum entropy policy. On a variety of simulated robotic tasks, we show that this simple objective results in the unsupervised emergence of diverse skills, such as walking and jumping. In a number of reinforcement learning benchmark environments, our method is able to learn a skill that solves the benchmark task despite never receiving the true task reward. We show how pretrained skills can provide a good parameter initialization for downstream tasks, and can be composed hierarchically to solve complex, sparse reward tasks. Our results suggest that unsupervised discovery of skills can serve as an effective pretraining mechanism for overcoming challenges of exploration and data efficiency in reinforcement learning.
ICLR 2019
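The "information theoretic objective" mentioned in the abstract is a mutual-information bound: DIAYN samples a skill z from a fixed prior p(z), trains a discriminator q_φ(z|s) to infer the skill from visited states, and rewards the policy with r_z(s) = log q_φ(z|s) − log p(z), while a maximum-entropy RL algorithm (SAC in the paper) keeps the policy stochastic. The sketch below illustrates this pseudo-reward only; the network sizes, observation dimensionality, and variable names are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of the DIAYN pseudo-reward (illustrative assumptions throughout).
# A discriminator q_phi(z | s) learns to infer which skill z produced a state s;
# the policy is rewarded for visiting states that make its skill identifiable.

num_skills = 50   # skills are drawn from a fixed uniform prior p(z)
obs_dim = 17      # hypothetical observation dimensionality

discriminator = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, num_skills),
)

def diayn_reward(state: torch.Tensor, skill: torch.Tensor) -> torch.Tensor:
    """Pseudo-reward r_z(s) = log q_phi(z | s) - log p(z) for a batch of states.

    `state` has shape (batch, obs_dim); `skill` holds integer skill indices
    with shape (batch,). p(z) is the uniform prior over skills.
    """
    log_q = torch.log_softmax(discriminator(state), dim=-1)    # log q_phi(z | s)
    log_q_z = log_q.gather(1, skill.unsqueeze(1)).squeeze(1)   # log-prob of the sampled skill
    log_p_z = -torch.log(torch.tensor(float(num_skills)))      # log of the uniform prior
    return log_q_z - log_p_z
```

In the paper, the discriminator is trained with a standard cross-entropy loss to predict the sampled skill, and the entropy term of the objective is supplied by SAC's entropy bonus, so the agent never observes the environment's task reward during skill learning.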
Results from the Paper
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Unsupervised Reinforcement Learning | URLB (pixels, 10^5 frames) | DIAYN | Walker (mean normalized return) | 16.28±8.69 | #5 |
| Unsupervised Reinforcement Learning | URLB (pixels, 10^5 frames) | DIAYN | Quadruped (mean normalized return) | 24.69±7.80 | #3 |
| Unsupervised Reinforcement Learning | URLB (pixels, 10^5 frames) | DIAYN | Jaco (mean normalized return) | 8.11±2.88 | #6 |
| Unsupervised Reinforcement Learning | URLB (pixels, 10^6 frames) | DIAYN | Walker (mean normalized return) | 16.58±9.70 | #5 |
| Unsupervised Reinforcement Learning | URLB (pixels, 10^6 frames) | DIAYN | Quadruped (mean normalized return) | 28.52±7.52 | #3 |
| Unsupervised Reinforcement Learning | URLB (pixels, 10^6 frames) | DIAYN | Jaco (mean normalized return) | 6.80±3.90 | #6 |
| Unsupervised Reinforcement Learning | URLB (pixels, 2*10^6 frames) | DIAYN | Walker (mean normalized return) | 17.54±12.24 | #6 |
| Unsupervised Reinforcement Learning | URLB (pixels, 2*10^6 frames) | DIAYN | Quadruped (mean normalized return) | 29.67±10.31 | #4 |
| Unsupervised Reinforcement Learning | URLB (pixels, 2*10^6 frames) | DIAYN | Jaco (mean normalized return) | 5.32±1.67 | #7 |
| Unsupervised Reinforcement Learning | URLB (pixels, 5*10^5 frames) | DIAYN | Walker (mean normalized return) | 15.73±9.38 | #5 |
| Unsupervised Reinforcement Learning | URLB (pixels, 5*10^5 frames) | DIAYN | Quadruped (mean normalized return) | 25.15±6.85 | #3 |
| Unsupervised Reinforcement Learning | URLB (pixels, 5*10^5 frames) | DIAYN | Jaco (mean normalized return) | 7.53±3.03 | #6 |
| Unsupervised Reinforcement Learning | URLB (states, 10^5 frames) | DIAYN | Walker (mean normalized return) | 58.51±28.90 | #8 |
| Unsupervised Reinforcement Learning | URLB (states, 10^5 frames) | DIAYN | Quadruped (mean normalized return) | 27.69±7.23 | #8 |
| Unsupervised Reinforcement Learning | URLB (states, 10^5 frames) | DIAYN | Jaco (mean normalized return) | 23.94±7.82 | #9 |
| Unsupervised Reinforcement Learning | URLB (states, 10^6 frames) | DIAYN | Walker (mean normalized return) | 59.63±32.99 | #9 |
| Unsupervised Reinforcement Learning | URLB (states, 10^6 frames) | DIAYN | Quadruped (mean normalized return) | 49.35±9.07 | #4 |
| Unsupervised Reinforcement Learning | URLB (states, 10^6 frames) | DIAYN | Jaco (mean normalized return) | 14.81±5.78 | #9 |
| Unsupervised Reinforcement Learning | URLB (states, 2*10^6 frames) | DIAYN | Walker (mean normalized return) | 68.61±37.15 | #9 |
| Unsupervised Reinforcement Learning | URLB (states, 2*10^6 frames) | DIAYN | Quadruped (mean normalized return) | 58.98±13.24 | #5 |
| Unsupervised Reinforcement Learning | URLB (states, 2*10^6 frames) | DIAYN | Jaco (mean normalized return) | 9.47±2.92 | #10 |
| Unsupervised Reinforcement Learning | URLB (states, 5*10^5 frames) | DIAYN | Walker (mean normalized return) | 59.57±28.88 | #9 |
| Unsupervised Reinforcement Learning | URLB (states, 5*10^5 frames) | DIAYN | Quadruped (mean normalized return) | 35.98±9.90 | #5 |
| Unsupervised Reinforcement Learning | URLB (states, 5*10^5 frames) | DIAYN | Jaco (mean normalized return) | 21.09±4.73 | #9 |