
Primal Wasserstein Imitation Learning

Introduced by Dadashi et al. in Primal Wasserstein Imitation Learning

Primal Wasserstein Imitation Learning, or PWIL, is an imitation learning method based on the primal form of the Wasserstein distance between the expert's and the agent's state-action distributions. The reward function is derived offline, in contrast to recent adversarial IL algorithms that learn a reward function through interaction with the environment, and it requires little fine-tuning.

Source: Primal Wasserstein Imitation Learning
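
Because the reward comes from a greedy approximation of the primal optimal-transport problem rather than a learned discriminator, it can be computed directly from the expert demonstrations and the agent's own transitions. Below is a minimal sketch of such a greedy-coupling reward, assuming Euclidean costs on concatenated state-action vectors; the class name `GreedyWassersteinReward`, the hyperparameters `alpha` and `beta`, and the simplified normalization are illustrative assumptions, not the exact formulation or constants from the paper.

```python
import numpy as np


class GreedyWassersteinReward:
    """Sketch of a PWIL-style dense reward from a greedy transport coupling."""

    def __init__(self, expert_sa, episode_len, alpha=5.0, beta=5.0):
        # expert_sa: (D, dim) array of expert state-action pairs.
        # alpha, beta: illustrative reward-shaping hyperparameters (assumed).
        self.expert_sa = np.asarray(expert_sa, dtype=np.float64)
        self.episode_len = episode_len
        self.alpha = alpha
        self.beta = beta
        self.reset()

    def reset(self):
        # Each expert atom starts with weight 1/D; each agent step carries 1/T.
        self.expert_weights = np.full(
            len(self.expert_sa), 1.0 / len(self.expert_sa)
        )

    def reward(self, state, action):
        # Greedily transport the current step's weight to the nearest expert
        # atoms that still have weight left, accumulating the transport cost.
        sa = np.concatenate([state, action]).astype(np.float64)
        remaining = 1.0 / self.episode_len
        cost = 0.0
        while remaining > 1e-12:
            dists = np.linalg.norm(self.expert_sa - sa, axis=1)
            dists[self.expert_weights <= 0.0] = np.inf
            j = int(np.argmin(dists))
            if not np.isfinite(dists[j]):
                break  # expert weight exhausted
            moved = min(remaining, self.expert_weights[j])
            cost += moved * dists[j]
            self.expert_weights[j] -= moved
            remaining -= moved
        # Dense reward: large when the agent stays close to the expert support.
        return self.alpha * np.exp(-self.beta * self.episode_len * cost)
```

In use, `reset()` would be called at the start of each episode and `reward(state, action)` queried at every step, with the returned value fed to any standard RL algorithm in place of the environment reward.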


Tasks


Task Papers Share
Imitation Learning 3 37.50%
Decision Making 2 25.00%
Reinforcement Learning (RL) 2 25.00%
Continuous Control 1 12.50%

