no code implementations • 24 Jun 2021 • Katie Kang, Gregory Kahn, Sergey Levine
In this work, we propose a deep reinforcement learning algorithm with hierarchically integrated models (HInt).
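As a rough illustration of the hierarchical idea, the sketch below composes a perception-level model with a lower-level dynamics model at planning time; the function names, interfaces, and placeholder computations are all assumptions for illustration, not the paper's actual networks.

```python
import numpy as np

# Illustrative stand-ins: a perception model that maps an image to an
# intermediate command, and a low-level dynamics model that predicts how the
# robot's state evolves when tracking that command.
def perception_model(image):
    """Map a camera image to an intermediate command (e.g., a desired velocity)."""
    return np.tanh(image.mean()) * np.ones(2)        # placeholder 2-D command

def low_level_dynamics(state, command):
    """Predict the next low-dimensional state given the current state and command."""
    return state + 0.1 * command                      # placeholder integrator

def hierarchically_integrated_rollout(image, state, horizon=10):
    """Compose the two levels: the perception output drives the low-level model."""
    command = perception_model(image)
    trajectory = [state]
    for _ in range(horizon):
        state = low_level_dynamics(state, command)
        trajectory.append(state)
    return np.stack(trajectory)

traj = hierarchically_integrated_rollout(np.random.rand(64, 64), np.zeros(2))
print(traj.shape)  # (11, 2)
```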
no code implementations • 17 Dec 2020 • Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine
We propose a learning-based navigation system for reaching visually indicated goals and demonstrate this system on a real mobile robot platform.
1 code implementation • 9 Oct 2020 • Gregory Kahn, Pieter Abbeel, Sergey Levine
However, we believe that these disengagements not only show where the system fails, which is useful for troubleshooting, but also provide a direct learning signal by which the robot can learn to navigate.
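A minimal sketch of how such a disengagement signal could be used for planning, assuming a learned model that scores candidate action sequences by predicted disengagement probability; `predicted_disengagement` and its features are hypothetical placeholders.

```python
import numpy as np

def predicted_disengagement(observation, action_sequence):
    """Stand-in for a learned model that predicts the probability of a human
    disengagement at each step of a candidate action sequence."""
    steering_effort = np.abs(action_sequence).mean(axis=1)   # placeholder feature
    return 1.0 / (1.0 + np.exp(-(steering_effort - 0.5) * 5))

def plan(observation, horizon=8, num_candidates=256, rng=np.random.default_rng(0)):
    """Sample candidate action sequences and pick the one least likely to lead
    to a disengagement (optionally traded off against a goal-reaching cost)."""
    candidates = rng.uniform(-1.0, 1.0, size=(num_candidates, horizon, 1))
    costs = np.array([predicted_disengagement(observation, c).sum() for c in candidates])
    return candidates[np.argmin(costs)]

best_actions = plan(observation=None)
print(best_actions.shape)  # (8, 1)
```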
2 code implementations • 23 Apr 2020 • Suneel Belkhale, Rachel Li, Gregory Kahn, Rowan McAllister, Roberto Calandra, Sergey Levine
Our experiments demonstrate that our online adaptation approach outperforms non-adaptive methods on a series of challenging suspended payload transportation tasks.
1 code implementation • 13 Feb 2020 • Gregory Kahn, Pieter Abbeel, Sergey Levine
Mobile robot navigation is typically regarded as a geometric problem, in which the robot's objective is to perceive the geometry of the environment in order to plan collision-free paths towards a desired goal.
1 code implementation • 11 Feb 2019 • Katie Kang, Suneel Belkhale, Gregory Kahn, Pieter Abbeel, Sergey Levine
Deep reinforcement learning provides a promising approach for vision-based control of real-world robots.
no code implementations • 27 Dec 2018 • Rowan McAllister, Gregory Kahn, Jeff Clune, Sergey Levine
Our method estimates an uncertainty measure about the model's prediction, taking into account an explicit (generative) model of the observation distribution to handle out-of-distribution inputs.
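A toy sketch of one way to combine the two ingredients, assuming ensemble disagreement as the prediction-uncertainty proxy and an autoencoder-style reconstruction error as the generative out-of-distribution score; all functions below are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def ensemble_disagreement(predictions):
    """Epistemic-uncertainty proxy: variance across an ensemble's predictions."""
    return float(np.var(predictions, axis=0).mean())

def reconstruction_error(observation, autoencoder_reconstruction):
    """Generative-model proxy for out-of-distribution inputs: observations the
    learned observation model reconstructs poorly are treated as unfamiliar."""
    return float(np.mean((observation - autoencoder_reconstruction) ** 2))

def risk_adjusted_cost(task_cost, predictions, observation, reconstruction,
                       uncertainty_weight=1.0, ood_weight=1.0):
    """Penalize candidate actions whose predicted outcome is uncertain or whose
    input looks out-of-distribution, so the planner prefers cautious behavior."""
    return (task_cost
            + uncertainty_weight * ensemble_disagreement(predictions)
            + ood_weight * reconstruction_error(observation, reconstruction))

obs = np.random.rand(32)
recon = obs + 0.05 * np.random.randn(32)          # pretend autoencoder output
preds = np.random.randn(5, 10)                    # pretend 5-member ensemble rollout
print(risk_adjusted_cost(task_cost=1.0, predictions=preds,
                         observation=obs, reconstruction=recon))
```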
1 code implementation • 16 Oct 2018 • Gregory Kahn, Adam Villaflor, Pieter Abbeel, Sergey Levine
We show that a simulated robotic car and a real-world RC car can gather data and train fully autonomously, without any human-provided labels beyond those needed to train the detectors, and can then accomplish a variety of different tasks at test time.
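A sketch of the auto-labeling loop this implies, with hypothetical detector functions standing in for pretrained vision models; only the structure of the loop is the point.

```python
import numpy as np

# Hypothetical off-the-shelf detectors; in practice these would be pretrained
# vision models, and everything below is illustrative of the labeling loop only.
def detect_collision(image):
    return float(image.mean() > 0.9)              # placeholder

def detect_lane_offset(image):
    return float(image.std())                     # placeholder

def auto_label(images, actions):
    """Turn raw logged experience into a supervised dataset without human labels:
    each timestep is labeled by running the detectors on its image."""
    labels = np.array([[detect_collision(im), detect_lane_offset(im)] for im in images])
    return {"observations": images, "actions": actions, "event_labels": labels}

logged_images = np.random.rand(100, 64, 64)
logged_actions = np.random.uniform(-1, 1, size=(100, 2))
dataset = auto_label(logged_images, logged_actions)
print(dataset["event_labels"].shape)  # (100, 2)
```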
no code implementations • 14 Nov 2017 • Anusha Nagabandi, Guangzhao Yang, Thomas Asmar, Ravi Pandya, Gregory Kahn, Sergey Levine, Ronald S. Fearing
We present an approach for controlling a real-world legged millirobot that is based on learned neural network models.
2 code implementations • 29 Sep 2017 • Gregory Kahn, Adam Villaflor, Bosen Ding, Pieter Abbeel, Sergey Levine
To address the need to learn complex policies with few samples, we propose a generalized computation graph that subsumes value-based model-free methods and model-based methods, with specific instantiations interpolating between model-free and model-based.
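A schematic sketch of such a graph, assuming it predicts per-step rewards for part of the horizon and bootstraps the rest with a single value, which is one way the model-based and model-free extremes can be interpolated; the placeholder predictors below are not the paper's networks.

```python
import numpy as np

def predict_per_step_rewards(observation, action_sequence):
    """Model-based-leaning instantiation: predict a reward (or event) for every
    step of the action sequence. Placeholder for a learned network."""
    return -np.abs(action_sequence).sum(axis=1)

def predict_terminal_value(observation, action_sequence):
    """Model-free-leaning instantiation: predict a single value for the whole
    remaining sequence. Placeholder for a learned network."""
    return -float(np.abs(action_sequence).sum())

def sequence_value(observation, action_sequence, horizon_of_rewards):
    """One computation graph covering both extremes: predict rewards for the
    first `horizon_of_rewards` steps and bootstrap the rest with a value."""
    rewards = predict_per_step_rewards(observation, action_sequence[:horizon_of_rewards])
    bootstrap = predict_terminal_value(observation, action_sequence[horizon_of_rewards:])
    return rewards.sum() + bootstrap

actions = np.random.uniform(-1, 1, size=(10, 2))
print(sequence_value(None, actions, horizon_of_rewards=5))  # 5 predicted rewards + a bootstrapped value
```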
8 code implementations • 8 Aug 2017 • Anusha Nagabandi, Gregory Kahn, Ronald S. Fearing, Sergey Levine
Model-free deep reinforcement learning algorithms have been shown to be capable of learning a wide range of robotic skills, but typically require a very large number of samples to achieve good performance.
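A minimal sketch of the learn-a-dynamics-model-then-plan recipe, using random-shooting MPC with a placeholder dynamics function standing in for the trained network.

```python
import numpy as np

def learned_dynamics(state, action):
    """Placeholder for a neural network trained on (s, a, s') transitions to
    predict the next state; here a hand-written stand-in."""
    return state + 0.05 * action

def mpc_random_shooting(state, cost_fn, horizon=15, num_candidates=1000,
                        action_dim=2, rng=np.random.default_rng(0)):
    """Random-shooting MPC with the learned model: sample action sequences,
    roll each out through the model, and execute the first action of the best."""
    candidates = rng.uniform(-1, 1, size=(num_candidates, horizon, action_dim))
    costs = np.zeros(num_candidates)
    for i, seq in enumerate(candidates):
        s = state
        for a in seq:
            s = learned_dynamics(s, a)
            costs[i] += cost_fn(s, a)
    return candidates[np.argmin(costs)][0]

goal = np.array([1.0, 1.0])
cost = lambda s, a: float(np.sum((s - goal) ** 2))
print(mpc_random_shooting(np.zeros(2), cost))
```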
no code implementations • 3 Feb 2017 • Gregory Kahn, Adam Villaflor, Vitchyr Pong, Pieter Abbeel, Sergey Levine
However, practical deployment of reinforcement learning methods must contend with the fact that the training process itself can be unsafe for the robot.
no code implementations • 2 Mar 2016 • Gregory Kahn, Tianhao Zhang, Sergey Levine, Pieter Abbeel
PLATO also maintains the MPC cost as an objective to avoid highly undesirable actions that would result from strictly following the learned policy before it has been fully trained.
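A sketch of that trade-off, assuming a squared-distance surrogate for the divergence term that pulls the teacher toward the learner while the task (MPC) cost still vetoes clearly bad actions; the candidate-scoring setup is illustrative only.

```python
import numpy as np

def adaptive_teacher_action(task_cost_fn, learner_mean_action, candidate_actions, kl_weight=1.0):
    """The teacher keeps the task (MPC) cost in its objective while being pulled
    toward the learner's action, so it never takes actions the task cost rates
    as highly undesirable just to match an under-trained policy."""
    costs = [task_cost_fn(a) + kl_weight * float(np.sum((a - learner_mean_action) ** 2))
             for a in candidate_actions]
    return candidate_actions[int(np.argmin(costs))]

candidates = np.random.default_rng(0).uniform(-1, 1, size=(128, 2))
task_cost = lambda a: float(np.sum((a - np.array([0.8, 0.0])) ** 2))   # placeholder MPC cost
learner_guess = np.array([-0.9, 0.9])                                   # under-trained policy output
print(adaptive_teacher_action(task_cost, learner_guess, candidates))
```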
no code implementations • 22 Sep 2015 • Tianhao Zhang, Gregory Kahn, Sergey Levine, Pieter Abbeel
We propose to combine MPC with reinforcement learning in the framework of guided policy search, where MPC is used to generate data at training time, under full state observations provided by an instrumented training environment.
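A sketch of the training-time data-generation loop this describes, with placeholder MPC, observation, and dynamics functions; the deployed policy would then be fit by supervised regression on the logged (observation, action) pairs.

```python
import numpy as np

def mpc_with_full_state(state):
    """Stand-in for an MPC controller that sees the full instrumented state at
    training time and returns a near-optimal action."""
    return -0.5 * state[:2]                              # placeholder controller

def render_observation(state, rng):
    """Stand-in for the raw sensor observation the deployed policy will see."""
    return np.concatenate([state, rng.normal(0, 0.01, size=4)])

def collect_guided_data(num_steps=200, rng=np.random.default_rng(0)):
    """Training-time loop: MPC (with full state) picks the actions, and we log
    (observation, action) pairs to supervise a policy that only sees observations."""
    state, data = rng.normal(size=4), []
    for _ in range(num_steps):
        action = mpc_with_full_state(state)
        data.append((render_observation(state, rng), action))
        state = state + 0.1 * np.concatenate([action, np.zeros(2)])  # placeholder dynamics
    return data

pairs = collect_guided_data()
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)  # 200 (8,) (2,)
```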