Imitation Learning

519 papers with code • 0 benchmarks • 18 datasets

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action the expert took in the state being visited. The demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
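To make the BC recipe concrete, here is a minimal sketch of cloning a discrete-action policy with supervised learning. It is not from any cited paper; the data shapes, network size, and hyperparameters are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Placeholder demonstration data: in practice these come from expert
# state-action trajectories (shapes here are illustrative assumptions).
states = torch.randn(1000, 8)            # 1000 visited states, 8-dim observations
actions = torch.randint(0, 4, (1000,))   # expert's discrete action at each state

# Behavior Cloning: treat each expert action as the label for its state
# and fit a state -> action classifier with ordinary supervised learning.
policy = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 4),                    # logits over 4 discrete actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):
    logits = policy(states)
    loss = loss_fn(logits, actions)      # supervised imitation loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At deployment, act greedily with the cloned policy.
def act(state):
    with torch.no_grad():
        return policy(state).argmax(dim=-1)
```

IRL, by contrast, would not fit the state-to-action mapping directly; it would search for a reward function that makes the demonstrated trajectories optimal, then derive a policy from that reward.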

Finally, a newer methodology, Inverse Q-Learning, aims to learn a Q-function directly from expert data, implicitly representing the reward, under which the optimal policy is given as a Boltzmann distribution over Q-values, as in soft Q-learning.
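For reference, below is a minimal sketch of the Boltzmann policy such methods recover from a learned Q-function. The temperature `alpha` and the Q-values are illustrative assumptions, not values from the source.

```python
import torch

def boltzmann_policy(q_values, alpha=1.0):
    """Soft / Boltzmann policy implied by a learned Q-function:
    pi(a|s) is proportional to exp(Q(s, a) / alpha), as in soft Q-learning.
    q_values: tensor of shape (..., num_actions); alpha: temperature."""
    return torch.softmax(q_values / alpha, dim=-1)

# Example: Q-values for one state (placeholder numbers).
q = torch.tensor([1.2, 0.3, -0.5])
print(boltzmann_policy(q))  # action probabilities, most mass on action 0
```

As `alpha` goes to zero the policy approaches the greedy argmax over Q-values; larger temperatures yield more uniform, exploratory behavior.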

Source: Learning to Imitate


Latest papers with no code

Imitation Game: A Model-based and Imitation Learning Deep Reinforcement Learning Hybrid

no code yet • 2 Apr 2024

Autonomous and learning systems based on Deep Reinforcement Learning have firmly established themselves as a foundation for approaches to creating resilient and efficient Cyber-Physical Energy Systems.

RiEMann: Near Real-Time SE(3)-Equivariant Robot Manipulation without Point Cloud Segmentation

no code yet • 28 Mar 2024

RiEMann learns a manipulation task from scratch with 5 to 10 demonstrations, generalizes to unseen SE(3) transformations and instances of target objects, resists visual interference of distracting objects, and follows the near real-time pose change of the target object.

Keypoint Action Tokens Enable In-Context Imitation Learning in Robotics

no code yet • 28 Mar 2024

We show that off-the-shelf text-based Transformers, with no additional training, can perform few-shot in-context visual imitation learning, mapping visual observations to action sequences that emulate the demonstrator's behaviour.

Offline Imitation Learning from Multiple Baselines with Applications to Compiler Optimization

no code yet • 28 Mar 2024

This work studies a Reinforcement Learning (RL) problem in which we are given a set of trajectories collected with K baseline policies.

LORD: Large Models based Opposite Reward Design for Autonomous Driving

no code yet • 27 Mar 2024

Recently, large pretrained models have gained significant attention as zero-shot reward models for tasks specified with desired linguistic goals.

LASIL: Learner-Aware Supervised Imitation Learning For Long-term Microscopic Traffic Simulation

no code yet • 26 Mar 2024

Due to the covariate shift issue, existing imitation learning-based simulators often fail to generate stable long-term simulations.

Grounding Language Plans in Demonstrations Through Counterfactual Perturbations

no code yet • 25 Mar 2024

Grounding the common-sense reasoning of Large Language Models in physical domains remains a pivotal yet unsolved problem for embodied AI.

Dyna-LfLH: Learning Agile Navigation in Dynamic Environments from Learned Hallucination

no code yet • 25 Mar 2024

In our new Dynamic Learning from Learned Hallucination (Dyna-LfLH), we design and learn a novel latent distribution and sample dynamic obstacles from it, so the generated training data can be used to learn a motion planner to navigate in dynamic environments.

Interpretable Modeling of Deep Reinforcement Learning Driven Scheduling

no code yet • 24 Mar 2024

In this work, we present a framework called IRL (Interpretable Reinforcement Learning) to address the issue of interpretability of DRL scheduling.

IBCB: Efficient Inverse Batched Contextual Bandit for Behavioral Evolution History

no code yet • 24 Mar 2024

This poses a new challenge for existing imitation learning approaches that can only utilize data from experienced experts.