GRAB is a dataset of full-body human motions interacting with and grasping 3D objects. It captures accurate finger and facial motions as well as contact between the objects and the body, covering 5 male and 5 female participants and 4 different motion intents. GRAB also provides binary contact maps between the body and the objects.
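A binary contact map simply marks which body-mesh vertices are touching the object. As an illustrative sketch (not GRAB's actual file layout or API), one can be derived by thresholding per-vertex body-to-object distances; the function name and threshold below are our own choices:

```python
import numpy as np

def binary_contact_map(body_verts, object_verts, threshold=0.005):
    """Return a boolean array: True where a body vertex lies within
    `threshold` (in the mesh's length units) of the nearest object vertex.

    body_verts:   (n_body, 3) array of body-mesh vertex positions
    object_verts: (n_object, 3) array of object-mesh vertex positions
    """
    # Pairwise distances between every body vertex and every object vertex.
    diffs = body_verts[:, None, :] - object_verts[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)   # shape (n_body, n_object)
    nearest = dists.min(axis=1)              # distance to the object surface proxy
    return nearest < threshold

# Toy example: the first body vertex is in contact, the second is not.
body = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
obj = np.array([[0.0, 0.0, 0.004], [2.0, 0.0, 0.0]])
contact = binary_contact_map(body, obj)      # → [True, False]
```

In practice, datasets like GRAB ship precomputed contact annotations; a distance threshold like this is just one common way such maps are produced.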
50 PAPERS • NO BENCHMARKS YET
Includes several sets of synthetic stereo images labelled with grasp rectangles representing parallel-jaw grasps (Cornell-like format).
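In the Cornell-like format, each grasp rectangle is given as four (x, y) image-plane corners, conventionally with one edge aligned with the gripper jaws (exact corner ordering varies between releases). A common preprocessing step converts the corners to a (center, angle, width, height) parameterization; the sketch below assumes the first edge is parallel to the jaws, and the function name is ours:

```python
import math

def rect_to_grasp(corners):
    """Convert four (x, y) rectangle corners, listed in order, to
    (cx, cy, angle, width, height).

    Assumption: the edge from corner 1 to corner 2 is parallel to the
    parallel-jaw gripper plates.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = corners
    cx = (x1 + x2 + x3 + x4) / 4.0           # rectangle center
    cy = (y1 + y2 + y3 + y4) / 4.0
    angle = math.atan2(y2 - y1, x2 - x1)     # jaw orientation in radians
    width = math.hypot(x2 - x1, y2 - y1)     # extent along the jaw edge
    height = math.hypot(x3 - x2, y3 - y2)    # extent across the jaws
    return cx, cy, angle, width, height

# Axis-aligned 4x2 rectangle centered at (2, 1).
grasp = rect_to_grasp([(0, 0), (4, 0), (4, 2), (0, 2)])  # → (2.0, 1.0, 0.0, 4.0, 2.0)
```

Whether `width` means jaw opening or plate length depends on the corner-ordering convention of the specific release, so check the dataset's own notes before training on it.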
1 PAPER • NO BENCHMARKS YET