Imitation Learning

519 papers with code • 0 benchmarks • 18 datasets

Imitation Learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take at the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
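
As a minimal sketch of the BC branch, assuming a discrete action space and a small feed-forward policy (the dimensions and random stand-in data below are illustrative, not from any particular paper):

```python
# Minimal behavior-cloning sketch (PyTorch): each demonstrated action is
# the supervised label for its state. Dimensions and data are placeholders.
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS = 8, 4                      # assumed dimensions
states = torch.randn(1024, STATE_DIM)              # stand-in expert states
actions = torch.randint(0, NUM_ACTIONS, (1024,))   # stand-in expert actions

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, NUM_ACTIONS),                    # logits over actions
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):                               # plain supervised loop
    opt.zero_grad()
    loss = loss_fn(policy(states), actions)
    loss.backward()
    opt.step()
```

IRL, by contrast, would fit a reward function to the same trajectories and derive the policy by optimizing against that reward rather than regressing actions directly.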

Finally, a newer methodology, Inverse Q-Learning, aims to learn Q-functions directly from expert data, implicitly representing rewards; under the learned Q-function, the optimal policy is given by a Boltzmann distribution, as in soft Q-learning.
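
Concretely, once a Q-function has been recovered from expert data, the induced policy is a softmax over Q-values. The sketch below shows that recovery step; the temperature alpha and the Q-values are illustrative placeholders:

```python
# Sketch: recovering the Boltzmann policy pi(a|s) ∝ exp(Q(s,a)/alpha)
# from a learned Q-function, as in soft Q-learning.
import numpy as np

def boltzmann_policy(q_values: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    z = q_values / alpha
    z -= z.max()                 # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

q_s = np.array([1.0, 2.0, 0.5])  # hypothetical Q(s, .) at one state
print(boltzmann_policy(q_s))     # action distribution at that state
```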

Source: Learning to Imitate

Latest papers with no code

Bootstrapping Linear Models for Fast Online Adaptation in Human-Agent Collaboration

no code yet • 16 Apr 2024

Initializing policies to maximize performance with unknown partners can be achieved by bootstrapping nonlinear models using imitation learning over large, offline datasets.

Unveiling Imitation Learning: Exploring the Impact of Data Falsity to Large Language Model

no code yet • 15 Apr 2024

Many recent studies endeavor to improve open-source language models through imitation learning, re-training them on synthetic instruction data from state-of-the-art proprietary models like ChatGPT and GPT-4.

Adversarial Imitation Learning via Boosting

no code yet • 12 Apr 2024

In the weighted replay buffer, the contribution of data from older policies is properly discounted, with weights computed based on the boosting framework.
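
The paper's code is not available, but a generic weighted replay buffer with geometric down-weighting of older data conveys the flavor; the decay rule here is an illustrative stand-in, not the paper's boosting-derived weights:

```python
# Hypothetical weighted replay buffer: data from older policies is
# geometrically down-weighted at every refresh.
import random

class WeightedReplayBuffer:
    def __init__(self, decay: float = 0.9):
        self.decay = decay
        self.items, self.weights = [], []

    def add_batch(self, transitions):
        # Discount everything collected under earlier policies first.
        self.weights = [w * self.decay for w in self.weights]
        self.items.extend(transitions)
        self.weights.extend([1.0] * len(transitions))

    def sample(self, k: int):
        # Weighted sampling; older data contributes proportionally less.
        return random.choices(self.items, weights=self.weights, k=k)
```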

AdaDemo: Data-Efficient Demonstration Expansion for Generalist Robotic Agent

no code yet • 11 Apr 2024

Encouraged by the remarkable achievements of language and vision foundation models, the development of generalist robotic agents through imitation learning on large demonstration datasets has become a prominent area of interest in robot learning.

Reward Learning from Suboptimal Demonstrations with Applications in Surgical Electrocautery

no code yet • 10 Apr 2024

This paper introduces a sample-efficient method that learns a robust reward function from a limited amount of ranked suboptimal demonstrations consisting of partial-view point cloud observations.
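
The paper's implementation is not released; the sketch below shows the common Bradley-Terry-style ranking loss used for reward learning from ranked demonstrations (as popularized by T-REX), with placeholder 16-dim features standing in for the paper's partial-view point-cloud observations:

```python
# Generic Bradley-Terry-style ranking loss for reward learning from
# ranked demonstrations; not the paper's actual method or encoder.
import torch
import torch.nn as nn

reward_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-4)

def ranking_loss(traj_worse, traj_better):
    r_w = reward_net(traj_worse).sum()   # predicted return of worse traj
    r_b = reward_net(traj_better).sum()  # predicted return of better traj
    # The better-ranked trajectory should receive higher predicted return.
    return -torch.log_softmax(torch.stack([r_w, r_b]), dim=0)[1]

worse = torch.randn(50, 16)              # placeholder observation features
better = torch.randn(50, 16)
opt.zero_grad()
ranking_loss(worse, better).backward()
opt.step()
```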

CNN-based Game State Detection for a Foosball Table

no code yet • 8 Apr 2024

In the game of Foosball, a compact and comprehensive game state description consists of the positional shifts and rotations of the figures and the position of the ball over time.
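
One possible encoding of that state description as a data structure (field names, units, and per-rod layout are assumptions for illustration only):

```python
# A possible container for the Foosball game state described above.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FoosballState:
    rod_shifts: Tuple[float, ...]      # lateral shift of each rod's figures
    rod_rotations: Tuple[float, ...]   # rotation angle of each rod
    ball_xy: Tuple[float, float]       # ball position on the table
    t: float                           # timestamp, to track motion over time
```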

SAFE-GIL: SAFEty Guided Imitation Learning

no code yet • 8 Apr 2024

The algorithm abstracts the imitation error as an adversarial disturbance in the system dynamics, injects it during data collection to expose the expert to safety critical states, and collects corrective actions.
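
A heavily simplified data-collection loop in that spirit might look as follows; the disturbance here is random rather than the paper's adversarial one, and every name is a placeholder:

```python
# Simplified collection in the spirit of SAFE-GIL: a bounded disturbance
# is injected into the dynamics so the expert is steered toward
# safety-critical states and its corrective actions are recorded.
import numpy as np

def collect_with_disturbance(env, expert, horizon=200, eps=0.1):
    data, s = [], env.reset()
    for _ in range(horizon):
        a = expert(s)                            # expert's corrective action at s
        data.append((s, a))                      # label the visited state
        d = eps * np.random.uniform(-1.0, 1.0, np.shape(a))
        s, done = env.step(a + d)                # disturbance perturbs the dynamics
        if done:
            s = env.reset()
    return data
```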

Prompting Multi-Modal Tokens to Enhance End-to-End Autonomous Driving Imitation Learning with LLMs

no code yet • 7 Apr 2024

The use of Large Language Models (LLMs) in reinforcement learning, particularly as planners, has attracted significant attention in the recent literature.

SENSOR: Imitate Third-Person Expert's Behaviors via Active Sensoring

no code yet • 4 Apr 2024

In many real-world visual Imitation Learning (IL) scenarios, there is a misalignment between the agent's and the expert's perspectives, which might lead to the failure of imitation.

DIDA: Denoised Imitation Learning based on Domain Adaptation

no code yet • 4 Apr 2024

Imitating skills from low-quality datasets, such as sub-optimal demonstrations and observations with distractors, is common in real-world applications.