Real-Time Hand Gesture Recognition: Integrating Skeleton-Based Data Fusion and Multi-Stream CNN

21 Jun 2024 · Oluwaleke Yusuf, Maki Habib, Mohamed Moustafa

This study focuses on Hand Gesture Recognition (HGR), which is vital for perceptual computing across various real-world contexts. The primary challenge in the HGR domain lies in dealing with the individual variations inherent in human hand morphology. To tackle this challenge, we introduce an innovative HGR framework that combines data-level fusion and an Ensemble Tuner Multi-stream CNN architecture. This approach encodes spatiotemporal gesture information from the skeleton modality into RGB images, thereby minimizing noise while improving semantic gesture comprehension. Our framework operates in real-time, significantly reducing hardware requirements and computational complexity while maintaining competitive performance on benchmark datasets such as SHREC2017, DHG1428, FPHA, LMDHG, and CNR. These results demonstrate the framework's robustness and pave the way for practical, real-time applications on resource-limited devices for human-machine interaction and ambient intelligence.
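The sketch below illustrates, in broad strokes, the pipeline the abstract describes: skeleton joint coordinates are rasterized into an RGB image (data-level fusion), several CNN streams process different views of that image, and a small "tuner" head combines the per-stream logits. All layer sizes, the specific encoding scheme, and the class and function names here are assumptions made for illustration; this is not the authors' e2eET implementation.

```python
# Minimal, illustrative PyTorch sketch of skeleton-to-RGB encoding plus a
# multi-stream CNN with an ensemble-tuner head. Hypothetical design choices
# throughout; not the paper's actual architecture.
import torch
import torch.nn as nn


def skeleton_to_rgb(seq: torch.Tensor, size: int = 224) -> torch.Tensor:
    """Map a (T, J, 3) skeleton sequence to a (3, size, size) RGB image.

    Time runs along the image height, joints along the width, and the
    normalized (x, y, z) coordinates fill the R, G, B channels.
    """
    lo = seq.amin(dim=(0, 1), keepdim=True)
    hi = seq.amax(dim=(0, 1), keepdim=True)
    norm = (seq - lo) / (hi - lo + 1e-6)               # values in [0, 1]
    img = norm.permute(2, 0, 1).unsqueeze(0)           # (1, 3, T, J)
    img = nn.functional.interpolate(img, size=(size, size),
                                    mode="bilinear", align_corners=False)
    return img.squeeze(0)                              # (3, size, size)


class StreamCNN(nn.Module):
    """One CNN stream; a real system could use a pretrained backbone instead."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


class EnsembleTunerMultiStream(nn.Module):
    """Multi-stream CNN whose per-stream logits are fused by a tuner layer."""
    def __init__(self, num_streams: int, num_classes: int):
        super().__init__()
        self.streams = nn.ModuleList(StreamCNN(num_classes)
                                     for _ in range(num_streams))
        self.tuner = nn.Linear(num_streams * num_classes, num_classes)

    def forward(self, views):                          # list of (B, 3, H, W)
        logits = [s(v) for s, v in zip(self.streams, views)]
        return self.tuner(torch.cat(logits, dim=1))


# Usage: two hypothetical "views" (e.g. different spatial orientations of the
# encoded skeleton image) classified into 14 gesture classes.
model = EnsembleTunerMultiStream(num_streams=2, num_classes=14)
views = [skeleton_to_rgb(torch.rand(60, 22, 3)).unsqueeze(0) for _ in range(2)]
print(model(views).shape)                              # torch.Size([1, 14])
```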

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Hand Gesture Recognition | DHG-14 | e2eET | Accuracy | 95.83 | #1 |
| Hand Gesture Recognition | DHG-28 | e2eET | Accuracy | 92.38 | #1 |
| Skeleton Based Action Recognition | First-Person Hand Action Benchmark | e2eET | 1:1 Accuracy | 91.83 | #3 |
| Skeleton Based Action Recognition | SBU | e2eET | Accuracy | 93.96 | #8 |
| Hand Gesture Recognition | SHREC 2017 | e2eET | 14 Gestures Accuracy | 97.86 | #1 |
| Hand Gesture Recognition | SHREC 2017 | e2eET | 28 Gestures Accuracy | 95.36 | #1 |
