We evaluate our architecture on two publicly available datasets, EgoGesture and the NVIDIA Dynamic Hand Gesture dataset, both of which require temporal detection and classification of the performed hand gestures.
Ranked #1 on Hand Gesture Recognition on EgoGesture
Our dataset and experiments can be of interest to communities of 3D hand pose estimation, 6D object pose, and robotics as well as action recognition.
Although skeleton-based action recognition has achieved great success in recent years, most existing methods suffer from large model sizes and slow execution speed.
Based on this new large-scale dataset, we are able to experiment with several deep learning methods for word-level sign recognition and evaluate their performance in large-scale scenarios.
The proposed algorithm uses a single network to predict the probabilities of the finger classes and the positions of the fingertips in one forward pass.
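The single-network, single-pass idea can be sketched as two output heads reading from one shared feature vector, so both predictions come out of the same forward propagation. This is a minimal numpy illustration; the head names, dimensions, and weights are hypothetical, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D logit vector
    e = np.exp(x - x.max())
    return e / e.sum()

def predict(features, w_cls, w_reg):
    """One forward pass through two hypothetical output heads:
    class probabilities for each finger, and (x, y) fingertip positions."""
    class_probs = softmax(features @ w_cls)          # probability per finger class
    fingertips = (features @ w_reg).reshape(-1, 2)   # one (x, y) pair per fingertip
    return class_probs, fingertips

rng = np.random.default_rng(0)
feat = rng.normal(size=8)                            # shared backbone feature
probs, tips = predict(feat,
                      rng.normal(size=(8, 5)),       # 5 finger classes
                      rng.normal(size=(8, 10)))      # 5 fingertips x (x, y)
```

Because both heads share the backbone feature, the cost of detection and localization together is essentially one network evaluation.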
Acquiring spatio-temporal states of an action is the most crucial step for action classification.
Ranked #1 on Hand Gesture Recognition on ChaLearn test
The proposed PointLSTM combines state information from neighboring points in the past with current features to update the current states by a weight-shared LSTM layer.
Ranked #1 on Hand Gesture Recognition on SHREC 2017
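The PointLSTM update described above can be sketched as a single LSTM cell whose weights are shared across all points: past hidden and cell states are gathered from neighboring points, pooled (mean pooling here is an assumption; the paper's neighborhood grouping is more involved), and combined with the current point's feature. A minimal numpy sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SharedLSTMCell:
    """One LSTM cell whose weights are shared across all points,
    a simplified stand-in for the weight-shared layer in PointLSTM."""
    def __init__(self, in_dim, hid, seed=0):
        rng = np.random.default_rng(seed)
        # stacked gate weights: input, forget, output, candidate
        self.W = rng.normal(scale=0.1, size=(4 * hid, in_dim + hid))
        self.b = np.zeros(4 * hid)

    def step(self, x, h_prev, c_prev):
        z = self.W @ np.concatenate([x, h_prev]) + self.b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        return h, c

def update_point(cell, feat, neighbor_hs, neighbor_cs):
    """Combine past states of neighboring points (mean-pooled, an
    assumption) with the current point feature via the shared cell."""
    return cell.step(feat, neighbor_hs.mean(axis=0), neighbor_cs.mean(axis=0))

rng = np.random.default_rng(1)
cell = SharedLSTMCell(in_dim=4, hid=6)
# current point feature plus past states of 3 neighboring points
h, c = update_point(cell, rng.normal(size=4),
                    rng.normal(size=(3, 6)), rng.normal(size=(3, 6)))
```

Weight sharing keeps the parameter count independent of the number of points, which is what makes the recurrence tractable on dense point-cloud sequences.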
Gesture recognition is an active topic in computer vision and pattern recognition, and it plays a vital role in natural human-computer interfaces.
Ranked #1 on Hand Gesture Recognition on Cambridge
In late fusion, each modality is processed in a separate unimodal Convolutional Neural Network (CNN) stream and the scores of each modality are fused at the end.
Ranked #5 on Action Recognition on NTU RGB+D
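The late-fusion scheme above reduces to a simple operation on the per-stream class scores: each unimodal CNN produces its own prediction, and only those scores are combined. A minimal sketch, assuming softmax scores and a weighted average (summation or max fusion are also common; the weights here are arbitrary, not from the paper):

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D logit vector
    e = np.exp(x - x.max())
    return e / e.sum()

def late_fusion(rgb_logits, depth_logits, weights=(0.5, 0.5)):
    """Each modality is classified by its own stream; only the
    final class scores are fused (weighted average, an assumption)."""
    p_rgb = softmax(rgb_logits)
    p_depth = softmax(depth_logits)
    return weights[0] * p_rgb + weights[1] * p_depth

# stand-in logits from two unimodal streams over 3 gesture classes
fused = late_fusion(np.array([2.0, 0.5, 0.1]), np.array([1.5, 1.0, 0.2]))
pred = int(np.argmax(fused))
```

The appeal of late fusion is that each stream can be trained (and even pre-trained) independently; the cost is that cross-modal interactions are only modeled at the score level.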
We propose a Dynamic Graph-Based Spatial-Temporal Attention (DG-STA) method for hand gesture recognition.
Ranked #1 on Hand Gesture Recognition on DHG-28
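The core mechanism behind attention over a dynamic skeleton graph can be sketched as scaled dot-product self-attention across hand-joint nodes: the attention weights act as dynamically computed edge weights of a fully connected joint graph. This is a heavy simplification of DG-STA (the actual method uses separate spatial and temporal attention with learned embeddings); all shapes and weights below are illustrative assumptions.

```python
import numpy as np

def self_attention(nodes, Wq, Wk, Wv):
    """Scaled dot-product attention over joint features; row i of the
    attention matrix is a learned, input-dependent edge-weight
    distribution from joint i to every other joint."""
    Q, K, V = nodes @ Wq, nodes @ Wk, nodes @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)   # each row is a distribution
    return a @ V, a

rng = np.random.default_rng(0)
joints = rng.normal(size=(21, 8))        # 21 hand joints, 8-dim features (assumed)
Wq, Wk, Wv = (rng.normal(scale=0.3, size=(8, 8)) for _ in range(3))
out, attn = self_attention(joints, Wq, Wk, Wv)
```

Because the graph edges are recomputed from the input at every step rather than fixed by the hand's kinematic tree, the model can emphasize whichever joint pairs are most informative for the current gesture.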