Efficient Heatmap-Guided 6-Dof Grasp Detection in Cluttered Scenes

Fast and robust object grasping in clutter is a crucial capability for robots. Most current works generate 6-Dof grasps from the whole observed point cloud, ignoring the guidance that global semantics can provide, which limits both grasp quality and real-time performance. In this work, we show that the widely used heatmap representation is underexploited for efficient 6-Dof grasp generation. We therefore propose an effective local grasp generator guided by grasp heatmaps, which infers in a global-to-local, semantic-to-point manner. Specifically, Gaussian encoding and a grid-based strategy are applied to predict grasp heatmaps that aggregate local points into graspable regions and provide global semantic information. Furthermore, a novel non-uniform anchor sampling mechanism is designed to improve grasp accuracy and diversity. Benefiting from highly efficient encoding in image space and from focusing only on points in local graspable regions, our framework performs high-quality grasp detection in real time and achieves state-of-the-art results. In addition, real-robot experiments demonstrate the effectiveness of our method, with a grasp success rate of 94% and a clutter completion rate of 100%. Our code is available at https://github.com/THU-VCLab/HGGD.
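The Gaussian encoding mentioned above follows a common keypoint-heatmap idea: each grasp center is rendered as a 2-D Gaussian peak in image space, so a network can regress a dense "graspability" map instead of reasoning over the full point cloud. A minimal sketch of such an encoder is below; the function name, `sigma` parameter, and per-pixel max aggregation are illustrative assumptions, not the paper's exact grid-based formulation.

```python
import numpy as np

def gaussian_grasp_heatmap(centers, shape, sigma=2.0):
    """Render 2-D grasp centers as a Gaussian heatmap (illustrative sketch;
    the paper's grid-based encoding may differ in detail).

    centers: iterable of (x, y) pixel coordinates of grasp centers
    shape:   (height, width) of the output heatmap
    sigma:   standard deviation of each Gaussian peak, in pixels
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    heat = np.zeros((h, w), dtype=np.float32)
    for cx, cy in centers:
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        heat = np.maximum(heat, g)       # keep the strongest peak per pixel
    return heat

heat = gaussian_grasp_heatmap([(10, 12), (30, 5)], (32, 40))
```

Max aggregation (rather than summing) keeps every peak at value 1.0 even when grasp centers overlap, which keeps the regression target bounded.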

IEEE Robotics 2023

Datasets


Task              Dataset            Model    Metric      Value   Global Rank
Robotic Grasping  GraspNet-1Billion  HGGD     AP_similar  51.20   #5
Robotic Grasping  GraspNet-1Billion  HGGD     AP_novel    22.17   #5
Robotic Grasping  GraspNet-1Billion  HGGD     AP_seen     59.36   #5
Robotic Grasping  GraspNet-1Billion  HGGD     mAP         44.24   #5
Robotic Grasping  GraspNet-1Billion  HGGD-CD  AP_similar  53.59   #4
Robotic Grasping  GraspNet-1Billion  HGGD-CD  AP_novel    24.59   #3
Robotic Grasping  GraspNet-1Billion  HGGD-CD  AP_seen     64.45   #3
Robotic Grasping  GraspNet-1Billion  HGGD-CD  mAP         47.54   #4
