We evaluate our architecture on two publicly available datasets, EgoGesture and the NVIDIA Dynamic Hand Gesture dataset, both of which require temporal detection and classification of the performed hand gestures.
Our dataset and experiments can be of interest to the 3D hand pose estimation, 6D object pose, robotics, and action recognition communities.
Although skeleton-based action recognition has achieved great success in recent years, most existing methods suffer from large model sizes and slow execution speed.
Robust recognition of hand gestures in real-world applications remains an open problem, owing to challenges such as cluttered backgrounds and unconstrained environmental factors.
We propose a Dynamic Graph-Based Spatial-Temporal Attention (DG-STA) method for hand gesture recognition.
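DG-STA's exact architecture is defined in the paper above; as a rough illustration of the underlying idea, the sketch below applies self-attention jointly over the (frame, joint) nodes of a hand-skeleton clip. All names, dimensions, and the single flattened attention pass are assumptions for this toy example, not the authors' method (DG-STA uses separate spatial and temporal attention over dynamically built graphs).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_temporal_attention(x, wq, wk, wv):
    """Toy self-attention over all (frame, joint) nodes of a gesture clip.

    x: (T, J, C) array of T frames, J hand joints, C features per joint.
    wq, wk, wv: (C, C) projection matrices (hypothetical, randomly initialized).
    Returns an array of shape (T, J, C).
    """
    T, J, C = x.shape
    nodes = x.reshape(T * J, C)                      # flatten graph nodes
    q, k, v = nodes @ wq, nodes @ wk, nodes @ wv     # query/key/value projections
    attn = softmax(q @ k.T / np.sqrt(C))             # (T*J, T*J) attention weights
    return (attn @ v).reshape(T, J, C)               # aggregate and restore shape

# Example: 8 frames, 21 hand joints, 16-dim features per joint.
rng = np.random.default_rng(0)
T, J, C = 8, 21, 16
x = rng.standard_normal((T, J, C))
wq, wk, wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
out = spatial_temporal_attention(x, wq, wk, wv)
print(out.shape)  # (8, 21, 16)
```

Attending over all frame-joint pairs at once lets every joint's new feature depend on every other joint at every time step, which is the intuition behind combining spatial and temporal attention for gesture sequences.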
Gesture recognition is an active topic in computer vision and pattern recognition, and plays a vital role in natural human-computer interfaces.
In this paper, we introduce a new 3D hand gesture recognition approach based on a deep learning model.