GRAB is a dataset of full-body human motions interacting with and grasping 3D objects. It captures accurate finger and facial motions as well as the contact between the objects and the body. The dataset covers 5 male and 5 female participants and 4 different motion intents, and also provides binary contact maps between the body and the objects.
28 PAPERS • NO BENCHMARKS YET
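To make the sequence layout concrete, below is a minimal sketch of reading one GRAB recording and counting the body vertices in contact at a given frame. The file path and the field names (`body`, `object`, `contact`) are illustrative assumptions about the per-sequence archives, not the official schema; consult the GRAB code release for the actual layout.

```python
import numpy as np

# Minimal sketch of reading one GRAB sequence. The path and the keys
# ("body", "object", "contact") are assumptions about the archive
# layout, not the official schema.
seq = np.load("s1/apple_eat_1.npz", allow_pickle=True)

body_params = seq["body"].item()    # assumed: SMPL-X pose/translation per frame
object_pose = seq["object"].item()  # assumed: 6-DoF object pose per frame
contact = seq["contact"].item()     # assumed: per-vertex binary contact maps

frame = 0
touching = np.flatnonzero(contact["body"][frame])  # body vertices in contact
print(f"frame {frame}: {touching.size} body vertices in contact")
```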
ContactDB is a dataset of contact maps for household objects that captures the rich hand-object contact occurring during grasping, enabled by the use of a thermal camera. ContactDB includes 3,750 3D meshes of 50 household objects textured with contact maps, along with 375K frames of synchronized RGB-D+thermal images.
17 PAPERS • 1 BENCHMARK
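Since the contact maps are baked into the object meshes, a natural first step is recovering a binary contact mask from per-vertex contact intensity. The sketch below assumes the intensity is stored in the vertex colors of a PLY file; the file name and threshold are placeholders, not ContactDB's documented format.

```python
import numpy as np
import trimesh

# Sketch of thresholding a ContactDB-style contact map into a binary
# mask. Assumes contact intensity is stored as per-vertex colors on
# the mesh; the file name and threshold are illustrative.
mesh = trimesh.load("pan_use_contactmap.ply", process=False)

# Treat the red channel as contact intensity in [0, 1].
intensity = np.asarray(mesh.visual.vertex_colors)[:, 0] / 255.0
contact_mask = intensity > 0.4  # assumed threshold separating contact from background

print(f"{contact_mask.sum()} / {len(contact_mask)} vertices marked as contact")
```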
ContactPose is a dataset of hand-object contact paired with hand pose, object pose, and RGB-D images. It contains 2,306 unique grasps of 25 household objects, grasped with 2 functional intents by 50 participants, and more than 2.9M RGB-D grasp images.
15 PAPERS • 1 BENCHMARK
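Pairing the hand poses with the grasp images requires projecting 3D joints into the image plane. The following is a generic pinhole-projection sketch; the intrinsics and joint coordinates are placeholders, not ContactPose's actual calibration.

```python
import numpy as np

# Generic pinhole projection of 3D hand joints into an RGB-D frame,
# the kind of operation needed to pair hand poses with grasp images.
# The intrinsics K below are placeholders.
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

joints = np.array([[0.02, -0.01, 0.55],   # hypothetical wrist, metres
                   [0.05,  0.00, 0.53]])  # hypothetical fingertip
print(project(joints, K))
```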
Robot grasping is often formulated as a learning problem. With the increasing speed and quality of physics simulations, generating large-scale grasping datasets to feed learning algorithms is becoming more and more popular. An often overlooked question is how to generate the grasps that make up these datasets. In this paper, we review, classify, and compare different grasp sampling strategies. Our evaluation is based on a fine-grained discretization of SE(3) and uses physics-based simulation to assess the quality and robustness of the corresponding parallel-jaw grasps. Specifically, we consider more than 1 billion grasps for each of the 21 objects from the YCB dataset. This dense dataset lets us evaluate existing sampling schemes with respect to their bias and efficiency. Our experiments show that some popular sampling schemes contain significant bias and do not cover all possible ways an object can be grasped.
2 PAPERS • NO BENCHMARKS YET
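For a flavor of the schemes being compared, the sketch below draws parallel-jaw grasp poses uniformly over SE(3) within an object's bounding box, one simple baseline; the paper's fine-grained discretization and physics-based evaluation are not reproduced here.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Baseline grasp sampling: poses drawn uniformly over SE(3) inside a
# bounding box. A sketch only, not the paper's exact scheme.
rng = np.random.default_rng(0)

def sample_grasps(n: int, bbox_min: np.ndarray, bbox_max: np.ndarray):
    """Return n grasp poses as 4x4 homogeneous transforms."""
    rotations = Rotation.random(n, random_state=0).as_matrix()  # uniform on SO(3)
    positions = rng.uniform(bbox_min, bbox_max, size=(n, 3))    # uniform in the box
    poses = np.tile(np.eye(4), (n, 1, 1))
    poses[:, :3, :3] = rotations
    poses[:, :3, 3] = positions
    return poses

grasps = sample_grasps(1000, np.array([-0.1, -0.1, 0.0]), np.array([0.1, 0.1, 0.2]))
print(grasps.shape)  # (1000, 4, 4)
```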
The GAS (Grasp Area Segmentation) dataset consists of 10,089 RGB images of cluttered scenes grouped into 1,121 grasp-area segmentation tasks. For each RGB image, we provide a binary segmentation map with the graspable and non-graspable regions for every object in the scene. The dataset can be used for meta-training part-based grasp-area estimation networks.
1 PAPER • NO BENCHMARKS YET
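A standard way to score predictions against binary maps like those in GAS is intersection-over-union between predicted and ground-truth masks. The sketch below uses random placeholder masks rather than actual dataset images.

```python
import numpy as np

# Intersection-over-union between a predicted graspable-region mask
# and a binary ground-truth mask. Masks here are random placeholders.
def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0  # two empty masks count as a perfect match

rng = np.random.default_rng(0)
pred = rng.random((480, 640)) > 0.5
gt = rng.random((480, 640)) > 0.5
print(f"IoU: {mask_iou(pred, gt):.3f}")
```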