GRAB is a dataset of full-body human motions interacting with and grasping 3D objects. It captures accurate finger and facial motions as well as the contact between the objects and the body. The dataset covers 10 participants (5 male, 5 female) and 4 different motion intents, and also provides binary contact maps between the body and the objects.
Source: https://github.com/otaheri/GRAB
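As a minimal sketch of how the per-frame binary contact maps might be consumed, the snippet below finds the frames of a sequence in which any body-object contact occurs. The file path, the `"contact"` array key, and its `(frames, vertices)` shape are illustrative assumptions, not the official GRAB data layout; consult the repository's documentation for the actual format.

```python
import numpy as np

def frames_with_contact(seq_path: str) -> np.ndarray:
    """Return indices of frames where the body touches the object.

    Assumes (hypothetically) that a sequence is a .npz archive holding
    a binary contact map under the key "contact" with shape (T, V):
    one 0/1 flag per object vertex per frame.
    """
    seq = np.load(seq_path, allow_pickle=True)
    contact = seq["contact"]  # hypothetical key, shape (T, V), values in {0, 1}
    # A frame is "in contact" if at least one vertex flag is set.
    return np.flatnonzero(contact.sum(axis=1) > 0)

# Example usage (path is a placeholder):
# idx = frames_with_contact("grab/s1/apple_eat.npz")
# print(f"{len(idx)} frames contain body-object contact")
```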