A Game Theoretic Perspective on Model-Based Reinforcement Learning

We illustrate that game theory provides a useful framework for understanding model-based reinforcement learning (MBRL). We point out that a large class of MBRL algorithms can be viewed as a game between two players: (1) a policy player, which attempts to maximize rewards under the learned model; and (2) a model player, which attempts to fit the real-world data collected by the policy player. The goals of the two players need not be aligned and are often conflicting. We show that stable MBRL algorithms can be derived by considering a Stackelberg game between the two players. This formulation gives rise to two natural families of MBRL algorithms, depending on which player is chosen as the leader in the Stackelberg game, and these families together encapsulate many existing MBRL algorithms. Through experiments on a suite of continuous control tasks, we validate that algorithms based on our framework lead to stable and sample-efficient learning.
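
Purely as an illustration of the two-player setup described above, the following is a minimal sketch based on the abstract alone; the symbols pi, m, J, ell, and mu_pi are assumed notation, not taken from the paper. It writes the two players as coupled optimization problems and shows the policy-as-leader Stackelberg version, in which the leader optimizes against the other player's best response.

```latex
% A minimal sketch of the two-player view of MBRL described in the abstract.
% Notation (pi, m, J, ell, mu_pi) is assumed here, not necessarily the paper's.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Policy player: maximize predicted return under the learned model m.
% Model player: minimize prediction error on the data mu_pi collected by pi.
\begin{align}
  \text{(policy player)} &\quad \max_{\pi}\; J(\pi, m) \\
  \text{(model player)}  &\quad \min_{m}\; \ell(m, \mu_{\pi})
\end{align}

% Stackelberg game with the policy player as leader: the policy is optimized
% against the model's best response to the data the policy generates.
% (Choosing the model as leader instead gives the second family of algorithms.)
\begin{equation}
  \max_{\pi}\; J\bigl(\pi,\, m^{\star}(\pi)\bigr)
  \quad\text{where}\quad
  m^{\star}(\pi) \in \operatorname*{arg\,min}_{m}\, \ell(m, \mu_{\pi})
\end{equation}

\end{document}
```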

ICML 2020
