62 papers with code • 12 benchmarks • 12 datasets
Gesture Recognition is an active field of research with applications such as automatic sign language recognition, human-robot interaction, and new ways of controlling video games.
Our in-lab study shows that GesturePod achieves 92% gesture recognition accuracy and can help perform common smartphone tasks faster.
Ranked #1 on Gesture Recognition on GesturePod
The proposed XceptionTime architecture is designed by integrating depthwise separable convolutions, adaptive average pooling, and a novel non-linear normalization technique.
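The abstract above does not include code; as a rough illustration of the depthwise separable convolution that XceptionTime builds on, here is a minimal NumPy sketch (function name, shapes, and padding choice are my own assumptions, not the paper's implementation):

```python
import numpy as np

def depthwise_separable_conv1d(x, depth_kernels, point_weights):
    """Depthwise separable 1-D convolution (illustrative sketch).

    x             : (channels, time) input signal
    depth_kernels : (channels, k) one kernel per input channel (depthwise step)
    point_weights : (out_channels, channels) 1x1 mixing (pointwise step)
    """
    c, t = x.shape
    # Depthwise: convolve each channel with its own kernel ("valid" padding).
    depth_out = np.stack([
        np.convolve(x[i], depth_kernels[i], mode="valid") for i in range(c)
    ])                                  # (channels, time - k + 1)
    # Pointwise: 1x1 convolution mixes channels at every time step.
    return point_weights @ depth_out    # (out_channels, time - k + 1)

x = np.random.randn(4, 32)              # 4-channel multivariate time series
dk = np.random.randn(4, 5)              # kernel size 5, one kernel per channel
pw = np.random.randn(8, 4)              # mix 4 channels up to 8 output channels
y = depthwise_separable_conv1d(x, dk, pw)
print(y.shape)                          # (8, 28)
```

Splitting the convolution this way costs far fewer parameters than a full convolution with the same channel counts, which is why it suits lightweight time-series models.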
Multivariate time series (MTS) arise when multiple interconnected sensors record data over time.
On this basis, a new variant of the LSTM is derived, in which convolutional structures are embedded only into the input-to-state transition.
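To make the idea concrete, the step below sketches such a cell in NumPy: the input contribution to each gate comes from a convolution, while the hidden-to-state transition remains a dense matrix. All names, shapes, and the "same" padding are illustrative assumptions, not the paper's code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_input_lstm_step(x, h, c, conv_kernels, W_h, b):
    """One LSTM step with convolutional input-to-state maps only.

    x    : (n,) input frame        conv_kernels : (4, k) one kernel per gate
    h, c : (n,) previous states    W_h : (4, n, n) dense state transitions
    b    : (4, n) gate biases
    """
    # "same"-padded convolution keeps each gate pre-activation at size n.
    z = np.stack([
        np.convolve(x, conv_kernels[gate], mode="same") + W_h[gate] @ h + b[gate]
        for gate in range(4)
    ])
    i, f, o = sigmoid(z[0]), sigmoid(z[1]), sigmoid(z[2])
    g = np.tanh(z[3])                   # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

n = 16
h = c = np.zeros(n)
h, c = conv_input_lstm_step(np.random.randn(n), h, c,
                            np.random.randn(4, 3),
                            np.random.randn(4, n, n) * 0.1,
                            np.zeros((4, n)))
print(h.shape)  # (16,)
```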
Based on this new large-scale dataset, we are able to experiment with several deep learning methods for word-level sign recognition and evaluate their performances in large scale scenarios.
The proposed algorithm uses a single convolutional neural network to predict finger-class probabilities and fingertip positions in one forward pass.
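A shared backbone with two output heads is one common way to get both predictions from one forward pass. The NumPy sketch below is only a schematic of that pattern (random stand-in weights, hypothetical sizes), not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical weights standing in for a trained backbone and two heads.
W_shared = rng.normal(size=(64, 128))     # shared feature extractor
W_cls    = rng.normal(size=(5, 64))       # finger-class head (5 classes)
W_reg    = rng.normal(size=(10, 64))      # fingertip head (5 x/y pairs)

def forward(x):
    """One forward pass yields both class probabilities and positions."""
    feat = np.maximum(W_shared @ x, 0.0)  # shared ReLU features
    probs = softmax(W_cls @ feat)         # finger-class probabilities
    xy = (W_reg @ feat).reshape(5, 2)     # fingertip (x, y) coordinates
    return probs, xy

probs, xy = forward(rng.normal(size=128))
print(probs.shape, xy.shape)  # (5,) (5, 2)
```

Because both heads reuse the same features, inference cost is essentially that of one network rather than two.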
Sign language recognition (SLR) seeks to recognize a sequence of continuous signs, but typically neglects the rich underlying grammatical and linguistic structures of sign language, which differ from those of spoken language.
The proposed PointLSTM combines state information from neighboring points in the past with current features to update the current states by a weight-shared LSTM layer.
Ranked #1 on Hand Gesture Recognition on NVGesture
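As a rough sketch of the idea of a weight-shared LSTM over point neighborhoods, the toy NumPy code below mean-pools each point's past-frame neighbor states and feeds them, with the point's current feature, through one shared LSTM cell. Function names, the mean pooling, and all shapes are my own illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def shared_lstm_cell(x, h, c, W, U, b):
    """Plain LSTM cell; the same W, U, b are shared by every point."""
    z = W @ x + U @ h + b                 # (4*n,) stacked gate pre-activations
    n = h.shape[0]
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new

def pointlstm_step(feats, prev_h, prev_c, neighbors, W, U, b):
    """Update every point's state with one weight-shared LSTM cell.

    feats     : (P, d)  current per-point features
    prev_h/c  : (P, n)  states from the previous frame
    neighbors : list of index arrays; neighbors[p] = p's past neighbors
    """
    P, n = prev_h.shape
    h_new, c_new = np.empty((P, n)), np.empty((P, n))
    for p in range(P):
        # Pool state information from neighboring points in the past frame.
        h_nb = prev_h[neighbors[p]].mean(axis=0)
        c_nb = prev_c[neighbors[p]].mean(axis=0)
        h_new[p], c_new[p] = shared_lstm_cell(feats[p], h_nb, c_nb, W, U, b)
    return h_new, c_new

rng = np.random.default_rng(1)
P, d, n = 4, 8, 6
h, c = pointlstm_step(rng.normal(size=(P, d)),
                      np.zeros((P, n)), np.zeros((P, n)),
                      [np.arange(P)] * P,   # toy graph: every point neighbors all
                      rng.normal(size=(4 * n, d)),
                      rng.normal(size=(4 * n, n)),
                      np.zeros(4 * n))
print(h.shape)  # (4, 6)
```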
Gesture recognition is an active topic in computer vision and pattern recognition, playing a vital role in natural human-computer interfaces.
Ranked #1 on Hand Gesture Recognition on Cambridge