Hand Gesture Recognition
41 papers with code • 18 benchmarks • 14 datasets
Hand gesture recognition (HGR) is a subarea of computer vision focused on classifying hand gestures: dynamic gestures from video, or static gestures from a single image. In the static case, gestures are also commonly called hand poses. HGR can also be performed on point clouds or hand-joint (skeleton) data.
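As a minimal illustration of the static/dynamic distinction above (the shapes are a common convention, not prescribed by any particular benchmark), the two settings can be told apart by whether a sample carries a time dimension:

```python
def is_dynamic(sample_shape):
    """Illustrative convention: a video clip shaped (T, H, W, C) is a
    dynamic gesture; a single image shaped (H, W, C) is a static pose."""
    return len(sample_shape) == 4

# A 32-frame RGB clip is dynamic; a single RGB image is a static pose.
assert is_dynamic((32, 112, 112, 3))
assert not is_dynamic((112, 112, 3))
```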
Most implemented papers
Improving the Performance of Unimodal Dynamic Hand-Gesture Recognition with Multimodal Training
We present an efficient approach for leveraging the knowledge from multiple modalities in training unimodal 3D convolutional neural networks (3D-CNNs) for the task of dynamic hand gesture recognition.
Fast and Robust Dynamic Hand Gesture Recognition via Key Frames Extraction and Feature Fusion
Gesture recognition is a hot topic in computer vision and pattern recognition and plays a vital role in natural human-computer interfaces.
Construct Dynamic Graphs for Hand Gesture Recognition via Spatial-Temporal Attention
We propose a Dynamic Graph-Based Spatial-Temporal Attention (DG-STA) method for hand gesture recognition.
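The full DG-STA method builds separate spatial and temporal attention over a dynamically constructed hand-joint graph; as a rough plain-Python sketch of just its core building block, scaled dot-product attention over per-joint feature vectors looks like this (toy features, not the paper's architecture):

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention: each query attends to all keys and
    returns a softmax-weighted average of the values."""
    d = len(queries[0])  # feature dimension, used for score scaling
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Two toy "joint" features attending to each other.
joints = [[1.0, 0.0], [0.0, 1.0]]
fused = attention(joints, joints, joints)
```

Each output row is a convex combination of the value vectors, so a joint's updated feature aggregates information from every other joint, weighted by similarity.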
MMTM: Multimodal Transfer Module for CNN Fusion
In late fusion, each modality is processed in a separate unimodal Convolutional Neural Network (CNN) stream and the scores of each modality are fused at the end.
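MMTM's contribution is to exchange information between the streams at intermediate layers instead; the late-fusion baseline the snippet describes can be sketched in a few lines (a common score-averaging rule, not MMTM itself):

```python
def late_fusion(score_lists, weights=None):
    """Fuse per-modality class scores by a (weighted) average --
    a common late-fusion rule applied after each unimodal stream."""
    n = len(score_lists)
    weights = weights or [1.0 / n] * n  # default: equal weighting
    num_classes = len(score_lists[0])
    return [sum(w * scores[c] for w, scores in zip(weights, score_lists))
            for c in range(num_classes)]

# Toy softmax scores from two unimodal streams (e.g. RGB and depth).
rgb   = [0.7, 0.2, 0.1]
depth = [0.4, 0.5, 0.1]
fused = late_fusion([rgb, depth])          # [0.55, 0.35, 0.1]
pred = max(range(len(fused)), key=fused.__getitem__)  # class 0
```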
IPN Hand: A Video Dataset and Benchmark for Real-Time Continuous Hand Gesture Recognition
The experimental results show that the state-of-the-art ResNeXt-101 model loses about 30% accuracy when evaluated on our real-world dataset, demonstrating that the IPN Hand dataset can serve as a benchmark and may help the community move forward in continuous HGR.
Low-latency hand gesture recognition with a low resolution thermal imager
Using hand gestures to answer a call or control the radio while driving is nowadays an established feature in higher-end cars.
An Efficient PointLSTM for Point Clouds Based Gesture Recognition
The proposed PointLSTM combines state information from neighboring points in the past with current features to update the current states by a weight-shared LSTM layer.
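The actual PointLSTM groups neighboring points in (x, y, z, t) space and propagates their states; as a rough stdlib-Python sketch of the weight-shared update it describes (toy scalar features and hypothetical weights, not the paper's layer), each point's new state comes from one shared LSTM cell fed its current feature and a neighbor's past state:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step of a tiny scalar LSTM cell. w holds, per gate
    ('i', 'f', 'o', 'g'), the (input weight, hidden weight, bias)."""
    pre = {g: w[g][0] * x + w[g][1] * h_prev + w[g][2] for g in w}
    i, f, o = sigmoid(pre["i"]), sigmoid(pre["f"]), sigmoid(pre["o"])
    g = math.tanh(pre["g"])          # candidate cell state
    c = f * c_prev + i * g           # updated cell state
    h = o * math.tanh(c)             # updated hidden state
    return h, c

# Weight sharing: every point is updated with the same cell weights
# (toy values chosen for illustration).
W_SHARED = {g: (0.5, 0.5, 0.0) for g in ("i", "f", "o", "g")}

def update_points(features, neighbor_states):
    """For each point, combine a neighboring point's past (h, c) state
    (a stand-in for PointLSTM's neighborhood grouping) with the point's
    current feature, using the single shared LSTM cell."""
    return [lstm_step(x, h, c, W_SHARED)
            for x, (h, c) in zip(features, neighbor_states)]
```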
TinyRadarNN: Combining Spatial and Temporal Convolutional Neural Networks for Embedded Gesture Recognition with Short Range Radars
Furthermore, the gesture recognition classifier has been implemented on a Parallel Ultra-Low Power Processor, demonstrating that real-time prediction is feasible at only 21 mW of power consumption for the full TCN sequence prediction network, with system-level power consumption kept below 100 mW.
Force myography benchmark data for hand gesture recognition and transfer learning
We contribute to the advancement of this field by releasing a benchmark dataset, collected with a commercially available sensor setup, from 20 persons performing 18 unique gestures, in the hope of enabling comparison of results as well as easier entry into this field of research.
Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle
Hand pointing and eye gaze have been extensively investigated in automotive applications for object selection and referencing.