InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations

NeurIPS 2017 • Yunzhu Li • Jiaming Song • Stefano Ermon

The goal of imitation learning is to mimic expert behavior without access to an explicit reward signal. Expert demonstrations provided by humans, however, often show significant variability due to latent factors that are typically not explicitly modeled. In this paper, we propose a new algorithm that can infer the latent structure of expert demonstrations in an unsupervised way.
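InfoGAIL's key idea is to augment adversarial imitation learning with a latent code and a variational lower bound on the mutual information between that code and the resulting trajectories, so that distinct modes of expert behavior map to distinct codes. The sketch below illustrates this mutual-information term only; the dimensions, the linear posterior network, and the toy code-conditioned policy are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration (not from the paper).
N_CODES = 3    # number of discrete latent codes c
TRAJ_DIM = 4   # feature dimension of a trajectory summary

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Q(c | tau): an approximate posterior over latent codes. Here a single
# linear map plus softmax stands in for the paper's neural posterior.
W_q = rng.normal(size=(TRAJ_DIM, N_CODES))

def posterior(tau):
    return softmax(tau @ W_q)

def mi_lower_bound(tau, c):
    """Variational lower bound on I(c; tau): E[log Q(c | tau)].
    Maximizing it pushes trajectories to reveal their latent code."""
    probs = posterior(tau)
    return np.mean(np.log(probs[np.arange(len(c)), c] + 1e-8))

# Sample codes and trajectory features from a toy code-conditioned
# "policy" (a linear map with noise, purely for illustration).
c = rng.integers(0, N_CODES, size=8)
W_pi = rng.normal(size=(N_CODES, TRAJ_DIM))
tau = W_pi[c] + 0.1 * rng.normal(size=(8, TRAJ_DIM))

# InfoGAIL-style surrogate objective: the adversarial imitation loss
# (placeholder here) minus a weighted mutual-information bonus.
lambda_1 = 0.1  # weight on the MI term; symbol assumed for illustration
gail_term = 0.0  # placeholder for the GAIL discriminator-based loss
loss = gail_term - lambda_1 * mi_lower_bound(tau, c)
print(float(loss))
```

In training, the posterior `Q` and the code-conditioned policy are optimized jointly, so trajectories generated under different codes become distinguishable and the latent structure of the demonstrations emerges without supervision.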

Full paper
