On Generalization of Adversarial Imitation Learning and Beyond

19 Jun 2021 · Tian Xu, Ziniu Li, Yang Yu, Zhi-Quan Luo

Despite extensive empirical evaluation, one of the fundamental questions in imitation learning remains unsettled: does AIL (adversarial imitation learning) provably generalize better than BC (behavioral cloning)? We study this open problem in tabular, episodic MDPs. For vanilla AIL, which uses direct maximum likelihood estimation, we provide both negative and positive answers under the known-transition setting. For one class of MDPs, we show that vanilla AIL has worse sample complexity than BC. The key insight is that the state-action distribution matching principle is too weak, so AIL may generalize poorly even on states visited in the expert demonstrations. For another class of MDPs, vanilla AIL is proven to generalize well even on unvisited states. Interestingly, its sample complexity is horizon-free, which provably beats BC by a wide margin. Finally, we establish a framework for the unknown-transition setting that allows AIL to explore via reward-free exploration strategies. Compared with the best-known online apprenticeship learning algorithm, the resulting algorithm improves both the sample complexity and the interaction complexity.
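To make the distribution-matching principle concrete, here is a minimal LaTeX sketch of the two objectives in an episodic tabular MDP with horizon H, written in standard notation rather than taken verbatim from the paper; the total-variation (ℓ1) distance below is one common instantiation of vanilla AIL, and D denotes the expert dataset.

% BC fits expert actions by maximum likelihood on the expert dataset D;
% vanilla AIL matches the learner's per-step state-action distribution d^pi_h
% to the empirical (maximum-likelihood) expert estimate \widehat{d}^E_h.
\begin{align*}
  \text{BC:}\quad & \max_{\pi} \sum_{(s_h, a_h) \in D} \log \pi_h(a_h \mid s_h), \\
  \text{vanilla AIL:}\quad & \min_{\pi} \sum_{h=1}^{H} \bigl\lVert d^{\pi}_h - \widehat{d}^{\,E}_h \bigr\rVert_{1}.
\end{align*}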
