Grasp Generation

7 papers with code • 0 benchmarks • 3 datasets

Grasp generation is the task of synthesizing stable hand or gripper poses for grasping an object, typically from 3D input such as an object point cloud or a scene observation.

Most implemented papers

Grasping Field: Learning Implicit Representations for Human Grasps

korrawe/grasping_field_demo 10 Aug 2020

Specifically, our generative model is able to synthesize high-quality human grasps, given only a 3D object point cloud.
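The paper's implicit "grasping field" maps each 3D query point to a pair of signed distances: one to the hand surface and one to the object surface, with contact where both vanish. A minimal numpy sketch of that idea, using two spheres as stand-ins for the learned hand and object surfaces (the function name and sphere geometry are illustrative, not the paper's model):

```python
import numpy as np

def grasping_field(points, hand_center, hand_radius, obj_center, obj_radius):
    """Return (signed dist to hand surface, signed dist to object surface)
    for each query point, with spheres standing in for the two surfaces."""
    d_hand = np.linalg.norm(points - hand_center, axis=1) - hand_radius
    d_obj = np.linalg.norm(points - obj_center, axis=1) - obj_radius
    return d_hand, d_obj

# Points where BOTH distances are near zero lie on both surfaces at once,
# i.e. candidate hand-object contact regions.
pts = np.array([[1.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
d_hand, d_obj = grasping_field(pts, np.zeros(3), 1.0,
                               np.array([2.0, 0.0, 0.0]), 1.0)
contact = (np.abs(d_hand) < 1e-6) & (np.abs(d_obj) < 1e-6)
```

In the paper this field is predicted by a neural network conditioned on the object point cloud; the sketch above only illustrates the representation itself.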

GRAB: A Dataset of Whole-Body Human Grasping of Objects

otaheri/GRAB ECCV 2020

Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time.

6-DOF GraspNet: Variational Grasp Generation for Object Manipulation

NVlabs/6dof-graspnet ICCV 2019

We evaluate our approach in simulation and real-world robot experiments.
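The variational recipe here is: sample latent codes from a standard normal prior and decode each into a 6-DOF grasp pose, so different latents yield diverse candidate grasps for the same object. A toy sketch of that sampling loop (the linear "decoder" is a hypothetical stand-in for the paper's learned network, which is also conditioned on the object point cloud):

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_grasp(z):
    """Toy decoder: map a latent code to a 4x4 grasp pose.
    First latent dim sets a yaw angle, the next three set the position."""
    yaw = float(z[0])
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
    T[:3, 3] = z[1:4]
    return T

# z ~ N(0, I): each sample decodes to a different candidate grasp.
latents = rng.standard_normal((16, 4))
grasps = [decode_grasp(z) for z in latents]
```

In the full system a separate evaluator network scores these candidates and gradient-based refinement improves the best ones before execution.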

Orientation Attentive Robotic Grasp Synthesis with Augmented Grasp Map Representation

nickgkan/orange 9 Jun 2020

Inherent morphological characteristics in objects may offer a wide range of plausible grasping orientations that obfuscates the visual learning of robotic grasping.

Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes

NVlabs/contact_graspnet 25 Mar 2021

Our novel grasp representation treats 3D points of the recorded point cloud as potential grasp contacts.
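Treating observed points as contacts reduces grasp prediction to a few per-point quantities: an approach direction, a gripper-closing (baseline) direction, and a grasp width, from which a full 6-DoF gripper pose can be assembled. A hedged numpy sketch of that assembly (vector names and the fixed depth offset are illustrative assumptions, not the paper's exact parameterization):

```python
import numpy as np

def grasp_from_contact(contact, approach, baseline, width, depth=0.1):
    """Build a 4x4 gripper pose from one contact point plus predicted
    approach direction, closing (baseline) direction, and grasp width."""
    approach = approach / np.linalg.norm(approach)
    # Project out the approach component so the closing axis is orthogonal.
    baseline = baseline - approach * np.dot(baseline, approach)
    baseline = baseline / np.linalg.norm(baseline)
    T = np.eye(4)
    T[:3, 0] = baseline                      # x: closing direction
    T[:3, 1] = np.cross(approach, baseline)  # y: completes right-handed frame
    T[:3, 2] = approach                      # z: approach direction
    # Gripper center: half a width along the closing axis from the contact,
    # backed off along the approach by an assumed fixed depth.
    T[:3, 3] = contact + 0.5 * width * baseline - depth * approach
    return T

T = grasp_from_contact(np.array([0.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, -1.0]),
                       np.array([1.0, 0.0, 0.0]),
                       width=0.08)
```

Because translation is anchored at an observed surface point, the network only has to regress the rotation and width per point, which is part of why this works well in clutter.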

CaTGrasp: Learning Category-Level Task-Relevant Grasping in Clutter from Simulation

wenbowen123/catgrasp 19 Sep 2021

This work proposes a framework to learn task-relevant grasping for industrial objects without the need for time-consuming real-world data collection or manual annotation.

OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction

lixiny/oakink CVPR 2022

We start by collecting 1,800 common household objects and annotating their affordances to construct the first knowledge base: Oak.