Search Results for author: Peter Pastor

Found 9 papers, 3 papers with code

How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned

no code implementations • 4 Feb 2021 • Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, Sergey Levine

Learning to perceive and move in the real world presents numerous challenges, some of which are easier to address than others, and some of which are often not considered in RL research that focuses only on simulated domains.

reinforcement-learning · Reinforcement Learning (RL)

Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping

no code implementations • 1 Oct 2019 • Cristian Bodnar, Adrian Li, Karol Hausman, Peter Pastor, Mrinal Kalakrishnan

The absence of an actor in Q2-Opt allows us to directly draw a parallel to the previous discrete experiments in the literature without the additional complexities induced by an actor-critic architecture.

Q-Learning · Reinforcement Learning (RL) · +1
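Q2-Opt applies distributional, quantile-based value learning to the QT-Opt grasping setup, removing the actor entirely. A minimal sketch of the quantile Huber loss used in this family of methods (QR-DQN style); the function name, shapes, and `kappa` default are illustrative, not the paper's implementation:

```python
import numpy as np

def quantile_huber_loss(pred_quantiles, target, kappa=1.0):
    """Quantile Huber loss for distributional Q-learning.

    pred_quantiles: (N,) predicted quantile values of the return distribution.
    target: scalar TD target (or array of target samples).
    """
    n = len(pred_quantiles)
    taus = (np.arange(n) + 0.5) / n  # quantile midpoints tau_i
    # Pairwise TD errors between each target sample and each predicted quantile.
    u = np.atleast_1d(target)[None, :] - pred_quantiles[:, None]
    # Huber penalty: quadratic near zero, linear beyond kappa.
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # Asymmetric quantile weighting |tau - 1{u < 0}| steers each output
    # toward its own quantile of the target distribution.
    weight = np.abs(taus[:, None] - (u < 0).astype(float))
    return (weight * huber).mean()
```

Because there is no actor, risk-aware behavior can be obtained at decision time by scoring actions with a risk measure (e.g. a lower quantile) of the predicted distribution instead of its mean.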

Learning Probabilistic Multi-Modal Actor Models for Vision-Based Robotic Grasping

no code implementations • 15 Apr 2019 • Mengyuan Yan, Adrian Li, Mrinal Kalakrishnan, Peter Pastor

Our actor model reduces inference time by a factor of three compared to the state-of-the-art CEM method.

Robotic Grasping
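The CEM baseline referenced here is the sampling-based optimizer that QT-Opt-style grasping systems run at inference time to maximize a learned Q-function over continuous actions; its repeated sample-score-refit loop is what makes it slower than a single actor forward pass. A minimal cross-entropy method sketch (the `q_func` interface and all hyperparameters are illustrative assumptions):

```python
import numpy as np

def cem_optimize(q_func, action_dim, iters=10, pop=128, elite_frac=0.1, seed=0):
    """Cross-entropy method: iteratively refit a Gaussian over actions
    toward the samples with the highest Q-values."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(action_dim), np.ones(action_dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, action_dim))
        scores = np.array([q_func(a) for a in samples])      # pop Q-evaluations
        elites = samples[np.argsort(scores)[-n_elite:]]      # keep the best
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu
```

Each iteration costs a full batch of Q-network evaluations, so replacing the loop with a learned actor that outputs an action directly is where the reported speedup comes from.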

Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping

1 code implementation • 22 Sep 2017 • Konstantinos Bousmalis, Alex Irpan, Paul Wohlhart, Yunfei Bai, Matthew Kelcey, Mrinal Kalakrishnan, Laura Downs, Julian Ibarz, Peter Pastor, Kurt Konolige, Sergey Levine, Vincent Vanhoucke

We extensively evaluate our approaches with a total of more than 25,000 physical test grasps, studying a range of simulation conditions and domain adaptation methods, including a novel extension of pixel-level domain adaptation that we term the GraspGAN.

Domain Adaptation · Industrial Robots · +1

End-to-End Learning of Semantic Grasping

no code implementations • 6 Jul 2017 • Eric Jang, Sudheendra Vijayanarasimhan, Peter Pastor, Julian Ibarz, Sergey Levine

We consider the task of semantic robotic grasping, in which a robot picks up an object of a user-specified class using only monocular images.

Object · object-detection · +3
