The EgoHands dataset contains 48 Google Glass videos of complex, first-person interactions between two people. Its main purpose is to enable better, data-driven approaches to understanding hands in first-person computer vision. The dataset offers high-quality, pixel-level ground-truth segmentation: 4,800 labeled frames containing 15,053 annotated hand instances.
22 PAPERS • NO BENCHMARKS YET
The HandNet dataset contains depth images of the hands of 10 participants, non-rigidly deforming in front of a RealSense RGB-D camera. Annotations are generated by a magnetic annotation technique: a 6D pose (position and orientation) is available for the center of the hand and for each of the five fingertips.
7 PAPERS • NO BENCHMARKS YET
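To make the annotation layout above concrete, here is a minimal sketch of how a HandNet-style label (one 6D pose for the hand center plus one per fingertip) could be represented. The class and field names are illustrative assumptions, not the dataset's actual schema or file format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose6D:
    # Hypothetical container for one 6D pose from the annotation.
    position: np.ndarray     # (3,) x, y, z in camera coordinates
    orientation: np.ndarray  # (3,) orientation, e.g. a unit direction vector

@dataclass
class HandAnnotation:
    # One labeled frame: hand center plus the five fingertips.
    center: Pose6D
    fingertips: dict  # maps finger name -> Pose6D

def zero_pose():
    return Pose6D(np.zeros(3), np.zeros(3))

# Build a dummy annotation to show the structure.
ann = HandAnnotation(
    center=zero_pose(),
    fingertips={f: zero_pose() for f in
                ("thumb", "index", "middle", "ring", "pinky")},
)
print(len(ann.fingertips))  # 5
```

Each frame thus carries six 6D poses in total (center plus five fingertips); the actual on-disk encoding depends on the dataset release.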
Includes egocentric videos containing hands in the wild.
4 PAPERS • NO BENCHMARKS YET
ObMan-Ego is a large-scale synthetic hand dataset with egocentric scenes in which the simulated hands are provided by ObMan. The dataset is used for a hand-segmentation task and its sim-to-real adaptation benchmark. The training, validation, and test sets contain 150,000, 6,500, and 6,500 images, respectively.
1 PAPER • NO BENCHMARKS YET
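The split sizes quoted above can be recorded as a simple configuration; this is only an illustrative sketch (the constant name is an assumption, not part of any ObMan-Ego release).

```python
# Hypothetical record of the ObMan-Ego split sizes stated above.
SPLITS = {"train": 150_000, "val": 6_500, "test": 6_500}

total = sum(SPLITS.values())
print(total)  # 163000 images across all splits
```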