Search Results for author: Juan Aparicio Ojea

Found 7 papers, 2 papers with code

Meta-Reinforcement Learning for Robotic Industrial Insertion Tasks

no code implementations • 29 Apr 2020 • Gerrit Schoettler, Ashvin Nair, Juan Aparicio Ojea, Sergey Levine, Eugen Solowjow

Robotic insertion tasks are characterized by contact and friction mechanics, making them challenging for conventional feedback control methods due to unmodeled physical effects.

Friction • Meta Reinforcement Learning • +2

UniGrasp: Learning a Unified Model to Grasp with Multifingered Robotic Hands

1 code implementation • 24 Oct 2019 • Lin Shao, Fabio Ferreira, Mikael Jorda, Varun Nambiar, Jianlan Luo, Eugen Solowjow, Juan Aparicio Ojea, Oussama Khatib, Jeannette Bohg

The majority of previous work has focused on developing grasp methods that generalize over novel object geometry but are specific to a certain robot hand.

Object

Domain Randomization for Active Pose Estimation

no code implementations • 10 Mar 2019 • Xinyi Ren, Jianlan Luo, Eugen Solowjow, Juan Aparicio Ojea, Abhishek Gupta, Aviv Tamar, Pieter Abbeel

In this work, we investigate how to improve the accuracy of domain randomization based pose estimation.

Pose Estimation

Residual Reinforcement Learning for Robot Control

no code implementations • 7 Dec 2018 • Tobias Johannink, Shikhar Bahl, Ashvin Nair, Jianlan Luo, Avinash Kumar, Matthias Loskyll, Juan Aparicio Ojea, Eugen Solowjow, Sergey Levine

In this paper, we study how we can solve difficult control problems in the real world by decomposing them into a part that is solved efficiently by conventional feedback control methods and a residual part that is solved with RL.

Friction • reinforcement-learning • +1
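
The decomposition described in the abstract above combines a hand-designed feedback controller with a learned residual policy. A minimal sketch of that idea follows; the PD gains, the state layout, and the `residual_policy` callable are hypothetical placeholders, not the paper's actual implementation.

```python
import numpy as np

def pd_controller(state, target, kp=1.0, kd=0.1):
    """Conventional feedback part: a simple PD law toward a target pose.
    Gains and structure are illustrative only."""
    pos_error = target - state["pos"]
    return kp * pos_error - kd * state["vel"]

def residual_rl_action(state, target, residual_policy):
    """Total command = hand-engineered controller + learned residual.
    The RL policy only has to correct what the controller cannot model,
    e.g. contact and friction effects."""
    u_hand = pd_controller(state, target)
    u_residual = residual_policy(state)  # any callable learned with standard RL
    return u_hand + u_residual
```

In this reading, an off-the-shelf RL algorithm trains only the residual term while the fixed controller keeps the behavior reasonable from the start.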

Learning Robotic Assembly from CAD

no code implementations • 20 Mar 2018 • Garrett Thomas, Melissa Chien, Aviv Tamar, Juan Aparicio Ojea, Pieter Abbeel

We propose to leverage the prior knowledge encoded in CAD models by guiding RL along a geometric motion plan calculated using the CAD data.

Motion Planning • Reinforcement Learning (RL)
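
One hedged way to read "guiding RL along a geometric motion plan" is as reward shaping that penalizes deviation from the planned path. The sketch below illustrates that interpretation only; the waypoints, the `plan_guided_reward` helper, and the shaping weight are made up for illustration and are not the paper's formulation.

```python
import numpy as np

# Hypothetical waypoints of a geometric motion plan computed from CAD data
# (e.g. an approach-and-insert path for an assembly task).
plan_waypoints = np.array([[0.0, 0.0, 0.20],
                           [0.0, 0.0, 0.10],
                           [0.0, 0.0, 0.02]])

def plan_guided_reward(ee_pos, task_reward, beta=5.0):
    """Shaped reward: the original task reward plus a penalty for straying
    from the nearest point on the CAD-derived motion plan."""
    dist_to_plan = np.min(np.linalg.norm(plan_waypoints - ee_pos, axis=1))
    return task_reward - beta * dist_to_plan
```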

Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics

no code implementations • 27 Mar 2017 • Jeffrey Mahler, Jacky Liang, Sherdil Niyaz, Michael Laskey, Richard Doan, Xinyu Liu, Juan Aparicio Ojea, Ken Goldberg

To reduce data collection time for deep learning of robust robotic grasp plans, we explore training from a synthetic dataset of 6.7 million point clouds, grasps, and analytic grasp metrics generated from thousands of 3D models from Dex-Net 1.0 in randomized poses on a table.

Robotics
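
The abstract above describes generating labeled grasp data synthetically rather than on a physical robot. The toy sketch below shows the general shape of such a pipeline (sample object poses, sample candidate grasps, score them with an analytic metric, pair each with a point cloud); every sub-step is a stand-in stub, not Dex-Net's actual algorithm or code.

```python
import numpy as np

def generate_synthetic_grasp_dataset(meshes, poses_per_mesh=5, grasps_per_pose=10):
    """Toy pipeline mirroring the idea in the abstract: for each 3D model,
    sample table poses, sample candidate grasps, score them analytically,
    and record a rendered point cloud. All values here are random stubs."""
    dataset = []
    for mesh in meshes:
        for _ in range(poses_per_mesh):
            pose = np.random.uniform(-0.1, 0.1, size=3)          # stand-in for a sampled stable table pose
            for _ in range(grasps_per_pose):
                grasp = np.random.uniform(-0.05, 0.05, size=4)   # stand-in grasp (x, y, z, angle)
                quality = float(np.random.rand())                # stand-in for an analytic grasp metric
                point_cloud = np.random.rand(64, 3)              # stand-in for a rendered point cloud
                dataset.append((point_cloud, grasp, quality))
    return dataset
```

A grasp-quality network would then be trained to predict `quality` from the (point cloud, grasp) pairs, which is the supervised-learning step the abstract refers to.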
