Grasp Generation

15 papers with code • 0 benchmarks • 3 datasets

Grasp generation is the task of predicting stable grasp configurations (such as 6-DoF gripper poses, pixel-wise grasp maps, or full human hand poses) for a target object or scene, typically from point cloud or RGB-D observations.

Most implemented papers

Grasping Field: Learning Implicit Representations for Human Grasps

korrawe/grasping_field_demo 10 Aug 2020

Specifically, our generative model is able to synthesize high-quality human grasps, given only a 3D object point cloud.
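The core idea of the grasping field is that every 3D query point maps to a pair of signed distances, one to the hand surface and one to the object surface, with contacts lying where both vanish. A minimal numpy sketch of that representation, using two spheres as stand-ins for the learned hand and object surfaces (the real model predicts these distances with a neural network):

```python
import numpy as np

def grasping_field(points, hand_center, hand_r, obj_center, obj_r):
    """Return (N, 2) signed distances: column 0 = hand, column 1 = object.

    Toy analytic version: hand and object are spheres; the paper's model
    replaces this with a learned implicit function.
    """
    d_hand = np.linalg.norm(points - hand_center, axis=1) - hand_r
    d_obj = np.linalg.norm(points - obj_center, axis=1) - obj_r
    return np.stack([d_hand, d_obj], axis=1)

def contact_mask(field, tol=0.05):
    """Contact candidates: points where both signed distances are near zero."""
    return np.all(np.abs(field) < tol, axis=1)

if __name__ == "__main__":
    pts = np.random.uniform(-2, 2, size=(5000, 3))
    field = grasping_field(pts, np.array([0.5, 0.0, 0.0]), 1.0,
                           np.array([-0.5, 0.0, 0.0]), 1.0)
    print("contact candidates:", contact_mask(field).sum())
```

The function names and the sphere geometry are illustrative only; what carries over is the two-channel distance output and the "both near zero" contact criterion.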

6-DOF GraspNet: Variational Grasp Generation for Object Manipulation

NVlabs/6dof-graspnet ICCV 2019

We evaluate our approach in simulation and real-world robot experiments.
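"Variational grasp generation" here means sampling latent codes from a prior and decoding each into a 6-DoF grasp (translation plus rotation). A hedged sketch of that sampling loop, where a random linear map stands in for the learned, point-cloud-conditioned decoder network:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 4
W = rng.normal(size=(LATENT_DIM, 7))  # hypothetical decoder weights

def decode_grasp(z):
    """Decode a latent code into a 6-DoF grasp: translation + unit quaternion."""
    raw = z @ W
    t = raw[:3]                            # gripper translation
    q = raw[3:] / np.linalg.norm(raw[3:])  # rotation as a unit quaternion
    return t, q

def sample_grasps(n):
    """Draw z ~ N(0, I) and decode, yielding diverse grasp hypotheses."""
    return [decode_grasp(rng.normal(size=LATENT_DIM)) for _ in range(n)]

if __name__ == "__main__":
    for t, q in sample_grasps(3):
        print("t:", np.round(t, 2), "q:", np.round(q, 2))
```

In the actual system the decoder is conditioned on the observed object point cloud and sampled grasps are further refined and filtered by an evaluator; this sketch only shows the sample-then-decode structure.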

GRAB: A Dataset of Whole-Body Human Grasping of Objects

otaheri/GRAB ECCV 2020

Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time.

Diffusion-based Generation, Optimization, and Planning in 3D Scenes

scenediffuser/Scene-Diffuser CVPR 2023

SceneDiffuser provides a unified model for solving scene-conditioned generation, optimization, and planning.

Orientation Attentive Robotic Grasp Synthesis with Augmented Grasp Map Representation

nickgkan/orange 9 Jun 2020

Inherent morphological characteristics of objects may offer a wide range of plausible grasping orientations, which obfuscates the visual learning of robotic grasping.

Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes

NVlabs/contact_graspnet 25 Mar 2021

Our novel grasp representation treats 3D points of the recorded point cloud as potential grasp contacts.
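Treating observed points as contacts means a full 6-DoF gripper pose can be reconstructed from a single contact point plus predicted directions (approach axis and grasp baseline) and a grasp width. A sketch of that reconstruction, with an orthonormalization step; the function name and the exact frame convention are assumptions, not the paper's code:

```python
import numpy as np

def pose_from_contact(contact, approach, baseline, width):
    """Build a 4x4 gripper pose from one contact point and two directions."""
    a = approach / np.linalg.norm(approach)
    b = baseline - np.dot(baseline, a) * a   # make baseline orthogonal to approach
    b /= np.linalg.norm(b)
    c = np.cross(a, b)                       # third axis completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = b, c, a
    T[:3, 3] = contact + 0.5 * width * b     # gripper origin midway between fingers
    return T

if __name__ == "__main__":
    T = pose_from_contact(np.array([0.0, 0.0, 0.5]),
                          np.array([0.0, 0.0, -1.0]),
                          np.array([1.0, 0.0, 0.0]), 0.08)
    print(np.round(T, 3))
```

The appeal of this representation is that translation comes almost for free from the observed point, so the network only has to predict directions and a width per point.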

CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation

wenbowen123/catgrasp 19 Sep 2021

This work proposes a framework to learn task-relevant grasping for industrial objects without the need of time-consuming real-world data collection or manual annotation.

OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction

lixiny/oakink CVPR 2022

We first collect 1,800 common household objects and annotate their affordances to construct the first knowledge base, Oak.

PEGG-Net: Pixel-Wise Efficient Grasp Generation in Complex Scenes

hzwang96/pegg-net 30 Mar 2022

Vision-based grasp estimation is an essential part of robotic manipulation tasks in the real world.
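Pixel-wise grasp generation in this style (as in PEGG-Net and related GG-CNN-like models) has the network output per-pixel quality, angle, and width maps, with the executed grasp read off at the highest-quality pixel. A minimal sketch of that readout step, using random maps as a stand-in for real network output:

```python
import numpy as np

def best_grasp(quality, angle, width):
    """Return (row, col, angle, width) at the argmax of the quality map."""
    r, c = np.unravel_index(np.argmax(quality), quality.shape)
    return r, c, angle[r, c], width[r, c]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    h, w = 32, 32
    q = rng.random((h, w))            # per-pixel grasp quality in [0, 1]
    ang = rng.uniform(-np.pi / 2, np.pi / 2, (h, w))  # gripper rotation per pixel
    wid = rng.uniform(0.0, 0.1, (h, w))               # gripper opening per pixel
    print(best_grasp(q, ang, wid))
```

The pixel coordinates are then back-projected through the camera intrinsics and depth to get a metric grasp; that step is omitted here.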

Keypoint-GraspNet: Keypoint-based 6-DoF Grasp Generation from the Monocular RGB-D input

ivalab/kgn 19 Sep 2022

Great success has been achieved in 6-DoF grasp learning from point cloud input, yet the computational cost arising from the orderlessness of point sets remains a concern.
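The keypoint idea sidesteps point-set processing: detect a few predefined gripper keypoints in the RGB-D frame, back-project them to 3D using depth, then recover the 6-DoF grasp pose by rigidly aligning the canonical keypoint layout to the detections. A sketch of that alignment via the Kabsch algorithm; the 4-point gripper layout below is made up for illustration:

```python
import numpy as np

# Hypothetical canonical gripper keypoints (rows: base, tip, left/right finger).
CANONICAL = np.array([[0.00, 0.0, 0.00],
                      [0.00, 0.0, 0.10],
                      [-0.04, 0.0, 0.06],
                      [0.04, 0.0, 0.06]])

def kabsch(src, dst):
    """Rigid transform (R, t) minimizing ||R @ src_i + t - dst_i|| over points."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc
```

Given detected keypoints back-projected to 3D, `kabsch(CANONICAL, detected)` yields the grasp rotation and translation in one closed-form step, with cost independent of the scene's point count.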