Search Results for author: Philippe Hansen-Estruch

Found 5 papers, 2 papers with code

Goal Representations for Instruction Following: A Semi-Supervised Language Interface to Control

no code implementations · 30 Jun 2023 · Vivek Myers, Andre He, Kuan Fang, Homer Walke, Philippe Hansen-Estruch, Ching-An Cheng, Mihai Jalobeanu, Andrey Kolobov, Anca Dragan, Sergey Levine

Our method achieves robust performance in the real world by learning an embedding from the labeled data that aligns language not to the goal image, but rather to the desired change between the start and goal images that the instruction corresponds to.

Instruction Following
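
The abstract's key idea, aligning language to the start-to-goal change rather than to the goal image alone, can be sketched as a contrastive objective. Below is a minimal PyTorch sketch; the encoder projections, dimensions, and InfoNCE loss are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch: align a language embedding with the *change* between start
# and goal image features rather than with the goal image itself.
# All module names and dimensions here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChangeAlignment(nn.Module):
    def __init__(self, img_dim=512, lang_dim=512, embed_dim=256):
        super().__init__()
        # Stand-ins for projections on top of pretrained image/language encoders.
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.lang_proj = nn.Linear(lang_dim, embed_dim)

    def forward(self, start_feats, goal_feats, lang_feats):
        # Embed the start->goal change, not the goal image alone.
        change = self.img_proj(goal_feats) - self.img_proj(start_feats)
        lang = self.lang_proj(lang_feats)
        return F.normalize(change, dim=-1), F.normalize(lang, dim=-1)

def contrastive_loss(change_emb, lang_emb, temperature=0.1):
    # InfoNCE over the batch: matching (change, instruction) pairs are positives.
    logits = change_emb @ lang_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```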

IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies

1 code implementation · 20 Apr 2023 · Philippe Hansen-Estruch, Ilya Kostrikov, Michael Janner, Jakub Grudzien Kuba, Sergey Levine

In this paper, we reinterpret IQL as an actor-critic method by generalizing the critic objective and connecting it to a behavior-regularized implicit actor.

Offline RL · Q-Learning
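
For context on the critic objective being generalized: IQL fits a value function toward an upper expectile of the Q-function via asymmetric regression. A minimal PyTorch sketch of that standard expectile loss follows; the tensor names and default tau are assumptions, and this is not the IDQL repository's code.

```python
# Sketch of IQL's expectile value objective, the critic objective that IDQL
# reinterprets within an actor-critic framework. Networks are omitted;
# `q_values` and `v_values` are hypothetical batched tensors.
import torch

def expectile_loss(q_values, v_values, tau=0.7):
    """Asymmetric (expectile) regression of V toward Q.

    With tau > 0.5, positive errors (Q > V) are weighted more heavily,
    so V approaches an upper expectile of Q over dataset actions.
    """
    diff = q_values - v_values
    weight = torch.where(diff > 0, tau, 1.0 - tau)
    return (weight * diff.pow(2)).mean()
```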

Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning

no code implementations · 27 Apr 2022 · Philippe Hansen-Estruch, Amy Zhang, Ashvin Nair, Patrick Yin, Sergey Levine

We learn this representation using a metric form of this abstraction, and show its ability to generalize to new goals in simulated manipulation tasks.

Reinforcement Learning (RL)
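
The "metric form" of the abstraction can be illustrated with a bisimulation-style representation loss in the spirit of prior deep bisimulation work: embedding distances are regressed toward a reward gap plus a discounted distance between next-state embeddings. The sketch below uses hypothetical names and is not the paper's exact objective.

```python
# Sketch of a bisimulation-metric representation loss. phi_* are embeddings of
# two sampled states, r_* their rewards, next_phi_* embeddings of their next
# states. All inputs are hypothetical batched tensors.
import torch
import torch.nn.functional as F

def bisimulation_metric_loss(phi_i, phi_j, r_i, r_j,
                             next_phi_i, next_phi_j, gamma=0.99):
    # Distance between the two embedded states.
    dist = torch.norm(phi_i - phi_j, dim=-1)
    # Bisimulation target: reward gap plus discounted next-state distance,
    # treated as a fixed regression target (no gradient).
    with torch.no_grad():
        target = (r_i - r_j).abs() + gamma * torch.norm(
            next_phi_i - next_phi_j, dim=-1)
    return F.mse_loss(dist, target)
```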

GEM: Group Enhanced Model for Learning Dynamical Control Systems

no code implementations · 7 Apr 2021 · Philippe Hansen-Estruch, Wenling Shang, Lerrel Pinto, Pieter Abbeel, Stas Tiomkin

In this work, we take advantage of the group structure underlying many dynamical control systems to build effective dynamical models that are amenable to sample-based learning.

Continuous Control · Model-based Reinforcement Learning
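
One way group structure can enter a learned dynamics model, sketched below under loose assumptions, is to predict a Lie-algebra element from the current latent state and action, then apply its matrix exponential (a group element) to the latent state. This is an illustrative construction, not the paper's exact architecture.

```python
# Sketch of group-structured latent dynamics: the network outputs a
# skew-symmetric generator (an so(n) element), whose matrix exponential is a
# rotation in SO(n) acting on the latent state. Dimensions are hypothetical.
import torch
import torch.nn as nn

class GroupDynamics(nn.Module):
    def __init__(self, latent_dim=8, action_dim=2, hidden=64):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(latent_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim * latent_dim))

    def forward(self, z, a):
        params = self.net(torch.cat([z, a], dim=-1))
        m = params.view(-1, self.latent_dim, self.latent_dim)
        skew = m - m.transpose(-1, -2)   # skew-symmetric: Lie algebra so(n)
        rot = torch.matrix_exp(skew)     # group element in SO(n)
        return (rot @ z.unsqueeze(-1)).squeeze(-1)
```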
