The IPN Hand dataset is a benchmark video dataset with sufficient size, variation, and real-world conditions to train and evaluate deep neural networks for continuous Hand Gesture Recognition (HGR).
5 PAPERS • NO BENCHMARKS YET
The VIVA challenge’s dataset is a multimodal dynamic hand gesture dataset specifically designed with difficult settings of cluttered background, volatile illumination, and frequent occlusion for studying natural human activities in real-world driving settings. This dataset was captured using a Microsoft Kinect device, and contains 885 intensity and depth video sequences of 19 different dynamic hand gestures performed by 8 subjects inside a vehicle.
4 PAPERS • 2 BENCHMARKS
Contains static tasks as well as a multitude of more dynamic tasks involving larger hand motion. The dataset comprises 55 tremor-patient recordings, each with associated ground-truth accelerometer data from the most affected hand, RGB video data, and aligned depth data.
3 PAPERS • NO BENCHMARKS YET
MlGesture is a dataset for hand gesture recognition tasks, recorded in a car with 5 different sensor types at two different viewpoints. The dataset contains over 1300 hand gesture videos from 24 participants and features 9 different hand gesture symbols. One sensor cluster with five different cameras is mounted in front of the driver in the center of the dashboard. A second sensor cluster is mounted on the ceiling looking straight down.
1 PAPER • NO BENCHMARKS YET
The dataset consists of images of human palms captured using a mobile phone. The images were taken in real-world scenarios, such as holding objects or performing simple gestures, and exhibit a wide variety of variations in illumination, distance, etc. It covers three main gestures: frontal open palm, back open palm, and fist with the wrist. It also includes many images of people wearing gloves.
0 PAPERS • NO BENCHMARKS YET