Robotic Grasp Detection using Deep Convolutional Neural Networks

24 Nov 2016  ·  Sulabh Kumra, Christopher Kanan

Deep learning has significantly advanced computer vision and natural language processing. While there have been some successes in applying deep learning to robotics, it has not yet been widely adopted in the field. In this paper, we present a novel robotic grasp detection system that predicts the best grasping pose of a parallel-plate robotic gripper for novel objects using the RGB-D image of the scene. The proposed model uses a deep convolutional neural network to extract features from the scene and then uses a shallow convolutional neural network to predict the grasp configuration for the object of interest. Our multi-modal model achieves an accuracy of 89.21% on the standard Cornell Grasp Dataset and runs at real-time speeds, redefining the state of the art for robotic grasp detection.
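
The abstract describes a two-stage, multi-modal design: a deep convolutional network extracts features from the RGB-D input, and a shallow network predicts the grasp configuration. The sketch below shows one way such a model could be assembled in PyTorch; the ResNet-50 backbones, the two-stream RGB/depth fusion, the layer sizes, and the 5-dimensional grasp-rectangle output are illustrative assumptions rather than the authors' exact implementation.

```python
# A minimal sketch (not the authors' released code) of a two-stream,
# multi-modal grasp predictor: deep CNN backbones extract features from the
# RGB image and the depth map, and a shallow network regresses a grasp
# configuration. The ResNet-50 backbone, layer widths, and the 5-D grasp
# rectangle (x, y, theta, height, width) parameterization are assumptions.

import torch
import torch.nn as nn
from torchvision import models


class MultiModalGraspPredictor(nn.Module):
    def __init__(self, grasp_dim: int = 5):
        super().__init__()
        # Deep feature extractors, one per modality. In practice these would
        # typically be initialized with ImageNet-pretrained weights.
        self.rgb_backbone = models.resnet50(weights=None)
        self.depth_backbone = models.resnet50(weights=None)
        # Drop the classification heads; keep the 2048-D pooled features.
        self.rgb_backbone.fc = nn.Identity()
        self.depth_backbone.fc = nn.Identity()
        # Shallow network that fuses both modalities and predicts the grasp
        # configuration for the object of interest.
        self.grasp_head = nn.Sequential(
            nn.Linear(2048 * 2, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, grasp_dim),
        )

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Replicate the single-channel depth map to 3 channels so it matches
        # the backbone's expected input (a preprocessing assumption).
        depth_3ch = depth.repeat(1, 3, 1, 1)
        features = torch.cat(
            [self.rgb_backbone(rgb), self.depth_backbone(depth_3ch)], dim=1
        )
        return self.grasp_head(features)


if __name__ == "__main__":
    model = MultiModalGraspPredictor()
    rgb = torch.randn(1, 3, 224, 224)    # RGB crop of the scene
    depth = torch.randn(1, 1, 224, 224)  # aligned depth map
    print(model(rgb, depth).shape)       # -> torch.Size([1, 5])
```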

No code implementations yet.

Datasets

Cornell Grasp Dataset

Results

Task: Robotic Grasping
Dataset: Cornell Grasp Dataset
Model: Multi-Modal Grasp Predictor
Metric: Accuracy (5-fold cross-validation)
Metric Value: 89.21%
Global Rank: #4

Methods


No methods listed for this paper.