Procedural Generalization by Planning with Self-Supervised World Models

One of the key promises of model-based reinforcement learning is the ability to generalize using an internal model of the world to make predictions in novel environments and tasks. However, the generalization ability of model-based agents is not well understood, because existing work on benchmarking generalization has focused on model-free agents. Here, we explicitly measure the generalization ability of model-based agents in comparison to their model-free counterparts. We focus our analysis on MuZero (Schrittwieser et al., 2020), a powerful model-based agent, and evaluate its performance on both procedural and task generalization. We identify three factors of procedural generalization -- planning, self-supervised representation learning, and procedural data diversity -- and show that by combining these techniques, we achieve state-of-the-art generalization performance and data efficiency on Procgen (Cobbe et al., 2019). However, we find that these factors do not always provide the same benefits for the task generalization benchmarks in Meta-World (Yu et al., 2019), indicating that transfer remains a challenge and may require different approaches than procedural generalization. Overall, we suggest that building generalizable agents requires moving beyond the single-task, model-free paradigm and towards self-supervised model-based agents that are trained in rich, procedural, multi-task environments.
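The self-supervised representation learning factor corresponds to the "MZ+Recon" variant in the results below, which augments MuZero's training objective with an auxiliary observation-reconstruction loss on the unrolled latent states. The following is a minimal numpy sketch of that idea only; all network shapes, names, and the linear maps standing in for learned networks are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions for observations and latent states.
OBS_DIM, LATENT_DIM = 16, 8

# Randomly initialised linear maps standing in for learned networks.
W_enc = rng.normal(size=(LATENT_DIM, OBS_DIM)) * 0.1    # representation: obs -> latent
W_dyn = rng.normal(size=(LATENT_DIM, LATENT_DIM)) * 0.1  # dynamics: latent -> next latent
W_dec = rng.normal(size=(OBS_DIM, LATENT_DIM)) * 0.1    # decoder: latent -> reconstructed obs

def encode(obs):
    return np.tanh(W_enc @ obs)

def unroll(state, k):
    # Unroll the latent dynamics model k steps, as MuZero does during training.
    states = [state]
    for _ in range(k):
        states.append(np.tanh(W_dyn @ states[-1]))
    return states

def recon_loss(states, observations):
    # Auxiliary self-supervised loss: decode each unrolled latent state and
    # compare it to the corresponding true observation (mean squared error).
    losses = [np.mean((W_dec @ s - o) ** 2) for s, o in zip(states, observations)]
    return float(np.mean(losses))

# Toy trajectory of K+1 observations.
K = 3
observations = [rng.normal(size=OBS_DIM) for _ in range(K + 1)]
states = unroll(encode(observations[0]), K)
aux = recon_loss(states, observations)

# The auxiliary term is added to MuZero's usual losses with some weight c:
#   total_loss = value_loss + policy_loss + reward_loss + c * aux
print(f"reconstruction loss over {K + 1} steps: {aux:.3f}")
```

The key design point is that the reconstruction gradient shapes the latent states produced by the dynamics model, giving the representation a training signal that does not depend on reward.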

Published at ICLR 2022.

Results from the Paper


Ranked #1 on Meta-Learning on ML10 (Meta-test success rate, zero-shot).

Task           Dataset  Model     Metric Name                          Metric Value  Global Rank
Meta-Learning  ML10     MZ+Recon  Meta-train success rate              97.8%         #1
Meta-Learning  ML10     MZ+Recon  Meta-test success rate (zero-shot)   25.0%         #2
Meta-Learning  ML10     MZ        Meta-train success rate              97.6%         #2
Meta-Learning  ML10     MZ        Meta-test success rate (zero-shot)   26.5%         #1
Meta-Learning  ML45     MZ+Recon  Meta-train success rate              74.9%         #2
Meta-Learning  ML45     MZ+Recon  Meta-test success rate (zero-shot)   18.5%         #1
Meta-Learning  ML45     MZ        Meta-train success rate              77.2%         #1
Meta-Learning  ML45     MZ        Meta-test success rate (zero-shot)   17.7%         #2
