Search Results for author: Alex X. Lee

Found 9 papers, 6 papers with code

Learning Visual Servoing with Deep Features and Fitted Q-Iteration

2 code implementations • 31 Mar 2017 • Alex X. Lee, Sergey Levine, Pieter Abbeel

Our approach is based on servoing the camera in the space of learned visual features, rather than image pixels or manually-designed keypoints.

Reinforcement Learning (RL)
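The snippet above servos in a learned feature space rather than over raw pixels or hand-designed keypoints. A minimal sketch of that idea, where a fixed random linear map stands in for the paper's pretrained CNN features and a greedy candidate search stands in for the fitted-Q policy (all names, dimensions, and dynamics here are illustrative assumptions, not from the paper):

```python
import numpy as np

# Illustrative stand-ins only: the paper uses pretrained CNN feature maps
# and a policy learned with fitted Q-iteration; here phi is a fixed random
# linear map and control is a greedy search over candidate moves.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 4))

def phi(x):
    return W @ x                          # "learned" visual features

def cost(x, x_goal):
    # Servo in feature space: distance between current and goal features,
    # not raw pixel or keypoint error.
    return float(np.sum((phi(x) - phi(x_goal)) ** 2))

x_goal = np.array([1.0, 0.5, -0.3, 0.2])  # goal observation (toy 4-D state)
x = np.zeros(4)

# Candidate moves include "stay put", so the cost never increases.
moves = [np.zeros(4)] + [0.1 * rng.standard_normal(4) for _ in range(64)]
for _ in range(50):
    x = x + min(moves, key=lambda m: cost(x + m, x_goal))

assert cost(x, x_goal) < cost(np.zeros(4), x_goal)
```

Greedy search is used here only to keep the sketch self-contained; the point is the objective, which compares feature vectors rather than images.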

Self-Supervised Visual Planning with Temporal Skip Connections

3 code implementations • 15 Oct 2017 • Frederik Ebert, Chelsea Finn, Alex X. Lee, Sergey Levine

One learning signal that is always available for autonomously collected data is prediction: if a robot can learn to predict the future, it can use this predictive model to take actions to produce desired outcomes, such as moving an object to a particular location.

Video Prediction
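The predict-then-act loop described in the abstract can be illustrated with a random-shooting model-predictive controller. Everything below is a toy stand-in under stated assumptions: the paper's learned video-prediction model is replaced by known 2-D point dynamics so the planning loop is self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(state, actions):
    # Toy stand-in for the learned predictive model: each action simply
    # displaces the (2-D) object position; returns the predicted trajectory.
    states = []
    for a in actions:
        state = state + a
        states.append(state)
    return states

def plan(state, goal, horizon=3, n_samples=512):
    # Random-shooting MPC: sample candidate action sequences, score each
    # by the predicted distance to the goal along its trajectory, and
    # return the first action of the best sequence.
    seqs = rng.uniform(-0.2, 0.2, size=(n_samples, horizon, 2))
    def score(seq):
        return sum(np.linalg.norm(s - goal) for s in predict(state, seq))
    return min(seqs, key=score)[0]

state, goal = np.zeros(2), np.array([0.5, -0.3])
for _ in range(20):                       # replan after every step (MPC)
    state = state + plan(state, goal)

assert np.linalg.norm(state - goal) < 0.3
```

Replanning at every step is what makes the loop closed-loop: only the first action of each plan is executed before the model is consulted again.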

Self-Supervised Learning of Object Motion Through Adversarial Video Prediction

no code implementations • ICLR 2018 • Alex X. Lee, Frederik Ebert, Richard Zhang, Chelsea Finn, Pieter Abbeel, Sergey Levine

In this paper, we study the problem of multi-step video prediction, where the goal is to predict a sequence of future frames conditioned on a short context.

Object • Self-Supervised Learning • +1

Stochastic Adversarial Video Prediction

4 code implementations • ICLR 2019 • Alex X. Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, Sergey Levine

However, learning to predict raw future observations, such as frames in a video, is exceedingly challenging -- the ambiguous nature of the problem can cause a naively designed model to average together possible futures into a single, blurry prediction.

Ranked #1 on Video Prediction on KTH (Cond metric)

Representation Learning • Video Generation • +1
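The "averaging" failure mode mentioned in the abstract is easy to see in one dimension: if a pixel's future value is bimodal, the MSE-optimal deterministic prediction is the mean of the modes, matching neither possible future. A toy demonstration (the numbers are illustrative, not from the paper):

```python
import numpy as np

# Two equally likely futures for one pixel (e.g. an edge moves left or
# right): the pixel ends up either dark (0.0) or bright (1.0).
futures = np.array([0.0] * 500 + [1.0] * 500)

# A deterministic model trained with MSE effectively searches for the
# single value minimizing mean squared error against all futures.
candidates = np.linspace(0.0, 1.0, 101)
mse = [np.mean((futures - c) ** 2) for c in candidates]
best = candidates[int(np.argmin(mse))]

print(best)   # 0.5: the average of both modes, a "blurry" in-between
              # value that matches neither possible future
assert abs(best - 0.5) < 1e-9
```

This is why the abstract calls naive multi-step prediction "exceedingly challenging": the stochastic and adversarial components exist precisely to make the model commit to sharp, sample-like futures instead of their mean.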

Robustness via Retrying: Closed-Loop Robotic Manipulation with Self-Supervised Learning

3 code implementations • 6 Oct 2018 • Frederik Ebert, Sudeep Dasari, Alex X. Lee, Sergey Levine, Chelsea Finn

We demonstrate that this idea can be combined with a video-prediction based controller to enable complex behaviors to be learned from scratch using only raw visual inputs, including grasping, repositioning objects, and non-prehensile manipulation.

Image Registration • Self-Supervised Learning • +1

How to Spend Your Robot Time: Bridging Kickstarting and Offline Reinforcement Learning for Vision-based Robotic Manipulation

no code implementations • 6 May 2022 • Alex X. Lee, Coline Devin, Jost Tobias Springenberg, Yuxiang Zhou, Thomas Lampe, Abbas Abdolmaleki, Konstantinos Bousmalis

Our analysis, both in simulation and in the real world, shows that our approach is the best across data budgets, while standard offline RL from teacher rollouts is surprisingly effective when enough data is given.

Offline RL • Reinforcement Learning (RL)
