Imitation Learning

507 papers with code • 0 benchmarks • 18 datasets

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action the expert took in the state being visited. The demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrations as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
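
The BC route amounts to ordinary supervised learning on the demonstration pairs. Below is a minimal sketch, assuming a `(N, obs_dim)` array of states and discrete expert actions; the network size, optimizer, and training loop are illustrative placeholders rather than the setup of any particular paper.

```python
import torch
import torch.nn as nn

def behavior_cloning(states, actions, n_actions, epochs=50, lr=1e-3):
    """Fit a state -> action classifier on expert (state, action) pairs."""
    states = torch.as_tensor(states, dtype=torch.float32)   # shape (N, obs_dim)
    actions = torch.as_tensor(actions, dtype=torch.long)    # shape (N,)

    policy = nn.Sequential(
        nn.Linear(states.shape[1], 64), nn.ReLU(),
        nn.Linear(64, n_actions),                 # logits over discrete actions
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()               # demonstrated action = target label

    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(policy(states), actions)   # supervised loss over all pairs
        loss.backward()
        optimizer.step()
    return policy
```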

A more recent approach, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing the reward; the corresponding optimal policy is then given as a Boltzmann distribution over the learned Q-values, as in soft Q-learning.

Source: Learning to Imitate
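
To make the inverse Q-learning view above concrete, the sketch below shows how a Boltzmann policy is read off a learned Q-function, with pi(a|s) proportional to exp(Q(s, a) / temperature), as in soft Q-learning. The Q-values and temperature are placeholders, not the output of any specific method.

```python
import numpy as np

def boltzmann_policy(q_values, temperature=1.0):
    """pi(a|s) proportional to exp(Q(s, a) / temperature)."""
    logits = np.asarray(q_values, dtype=np.float64) / temperature
    logits -= logits.max()            # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Example: Q-values for three actions in one state.
print(boltzmann_policy([1.0, 2.0, 0.5], temperature=0.5))
```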

Most implemented papers

Generative Adversarial Imitation Learning

hill-a/stable-baselines NeurIPS 2016

Consider learning a policy from example expert behavior, without interaction with the expert or access to a reinforcement signal.

End-to-end Driving via Conditional Imitation Learning

carla-simulator/imitation-learning 6 Oct 2017

However, driving policies trained via imitation learning cannot be controlled at test time.

Deep Q-learning from Demonstrations

opendilab/DI-engine 12 Apr 2017

We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process, and that automatically assesses the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism.

Behavioral Cloning from Observation

opendilab/DI-engine 4 May 2018

In this work, we propose a two-phase, autonomous imitation learning technique called behavioral cloning from observation (BCO) that aims to provide improved performance with respect to both of these aspects.

Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow

reinforcement-learning-kr/lets-do-irl ICLR 2019

By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients.
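
As a rough illustration of the bottleneck described above, the sketch below passes observations through a stochastic encoder and penalizes the KL divergence between the encoder distribution and a unit Gaussian prior, so the discriminator only sees a rate-limited latent code. The paper adapts the penalty weight by dual gradient ascent against an information constraint; here it is a fixed coefficient, and all names and sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BottleneckDiscriminator(nn.Module):
    """Discriminator that classifies from a sampled latent code z, not raw inputs."""
    def __init__(self, obs_dim, z_dim=32):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, 2 * z_dim)   # mean and log-variance of q(z|x)
        self.classifier = nn.Linear(z_dim, 1)          # expert-vs-policy logit from z

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()          # reparameterized sample
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1).mean()
        return self.classifier(z).squeeze(-1), kl

def vdb_discriminator_loss(disc, expert_obs, policy_obs, beta=0.1):
    """Classification loss plus the information-bottleneck KL penalty."""
    logit_e, kl_e = disc(expert_obs)
    logit_p, kl_p = disc(policy_obs)
    bce = F.binary_cross_entropy_with_logits(logit_e, torch.ones_like(logit_e)) \
        + F.binary_cross_entropy_with_logits(logit_p, torch.zeros_like(logit_p))
    return bce + beta * 0.5 * (kl_e + kl_p)
```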

SQIL: Imitation Learning via Reinforcement Learning with Sparse Rewards

opendilab/DI-engine ICLR 2020

Theoretically, we show that SQIL can be interpreted as a regularized variant of BC that uses a sparsity prior to encourage long-horizon imitation.
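
A minimal sketch of the mechanism behind SQIL, assuming transitions are simple (state, action, next_state) tuples: expert demonstrations are stored with a constant reward of 1 and the agent's own experience with reward 0, and an ordinary off-policy learner (e.g. soft Q-learning) is then trained on the combined buffer. The data layout here is illustrative.

```python
def build_sqil_buffer(expert_transitions, agent_transitions):
    """Relabel rewards SQIL-style: +1 for demonstrations, 0 for agent experience."""
    buffer = []
    for state, action, next_state in expert_transitions:
        buffer.append((state, action, next_state, 1.0))   # demonstration reward
    for state, action, next_state in agent_transitions:
        buffer.append((state, action, next_state, 0.0))   # online experience reward
    return buffer
```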

Exact Combinatorial Optimization with Graph Convolutional Neural Networks

ds4dm/learn2branch NeurIPS 2019

Combinatorial optimization problems are typically tackled by the branch-and-bound paradigm.

IQ-Learn: Inverse soft-Q Learning for Imitation

Div99/IQ-Learn NeurIPS 2021

In many sequential decision-making problems (e.g., robotics control, game playing, sequential prediction), human or expert data is available containing useful information about the task.

InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations

ermongroup/InfoGAIL NeurIPS 2017

The goal of imitation learning is to mimic expert behavior without access to an explicit reward signal.

Self-Imitation Learning

junhyukoh/self-imitation-learning ICML 2018

This paper proposes Self-Imitation Learning (SIL), a simple off-policy actor-critic algorithm that learns to reproduce the agent's past good decisions.
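
A hedged sketch of the self-imitation idea described above: past transitions are imitated only when the observed return exceeds the current value estimate, i.e. the advantage is clipped at zero. Tensor names and the value coefficient are illustrative, and the paper's prioritized replay and entropy terms are omitted for brevity.

```python
import torch

def self_imitation_loss(log_prob_actions, values, returns, value_coef=0.5):
    """Policy and value losses weighted by the clipped advantage (R - V)+."""
    clipped_adv = (returns - values).clamp(min=0.0)
    policy_loss = -(log_prob_actions * clipped_adv.detach()).mean()   # imitate only good outcomes
    value_loss = 0.5 * (clipped_adv ** 2).mean()                      # pull V up toward good returns
    return policy_loss + value_coef * value_loss
```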