Learning Invariant Reward Functions through Trajectory Interventions

29 Sep 2021 · Ivan Ovinnikov, Eugene Bykovets, Joachim M. Buhmann

Inverse reinforcement learning methods aim to retrieve the reward function of a Markov decision process from a dataset of expert demonstrations. Because such demonstrations are typically scarce, the learning model may absorb spurious correlations present in the data, so that a policy trained on the recovered reward function exhibits behavioural overfitting to the expert dataset. We study the generalization properties of the maximum entropy method for solving the inverse reinforcement learning problem, in both its exact and approximate formulations, and demonstrate that applying an instantiation of the invariant risk minimization principle recovers reward functions that induce better-performing policies across domains in the transfer setting.
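
The abstract does not spell out the algorithm, but as a rough illustration of the idea it describes, one could attach an IRMv1-style invariance penalty to a per-environment MaxEnt-IRL-style reward objective. The sketch below is an assumption-laden toy, not the authors' implementation: `RewardNet`, `maxent_irl_loss`, `irm_penalty`, the dummy `scale` multiplier, the `penalty_weight`, and the random stand-in batches are all illustrative choices.

```python
# Minimal sketch (not the paper's code): MaxEnt-IRL-style reward fitting across
# multiple environments, regularized with an IRMv1-type invariance penalty.
import torch
import torch.nn as nn


class RewardNet(nn.Module):
    """Small state-feature reward model r_theta(s); architecture is an arbitrary choice."""
    def __init__(self, state_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, states):
        return self.net(states).squeeze(-1)


def maxent_irl_loss(reward_net, expert_states, sampled_states, scale):
    """Contrastive MaxEnt-style surrogate: expert states should score higher than
    states sampled from the current policy (importance weights omitted for brevity)."""
    r_expert = scale * reward_net(expert_states)
    r_sampled = scale * reward_net(sampled_states)
    log_partition = torch.logsumexp(r_sampled, dim=0) - torch.log(
        torch.tensor(float(r_sampled.numel())))
    return -(r_expert.mean() - log_partition)


def irm_penalty(loss, scale):
    """IRMv1-style penalty: squared gradient of the per-environment loss with
    respect to a fixed scalar multiplier ("dummy classifier") on the reward."""
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2).sum()


state_dim = 4                      # illustrative dimensionality
reward_net = RewardNet(state_dim)
optimizer = torch.optim.Adam(reward_net.parameters(), lr=1e-3)
penalty_weight = 10.0              # trade-off between per-environment fit and invariance

# Toy per-environment batches standing in for expert and policy-sampled states.
envs = [
    (torch.randn(32, state_dim), torch.randn(32, state_dim)),
    (torch.randn(32, state_dim), torch.randn(32, state_dim)),
]

for step in range(100):
    total = torch.tensor(0.0)
    for expert_states, sampled_states in envs:
        scale = torch.tensor(1.0, requires_grad=True)  # IRMv1 dummy multiplier
        loss_e = maxent_irl_loss(reward_net, expert_states, sampled_states, scale)
        total = total + loss_e + penalty_weight * irm_penalty(loss_e, scale)
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
```

The intent, under these assumptions, is that rewards whose per-environment optima disagree incur a large penalty, nudging the learned reward toward features that are predictive of expert behaviour in every environment rather than spuriously correlated in one.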
