FastV2C-HandNet: Fast Voxel to Coordinate Hand Pose Estimation with 3D Convolutional Neural Networks

15 Jul 2019 · Rohan Lekhwani, Bhupendra Singh

Hand pose estimation from monocular depth images has been an important and challenging problem in the Computer Vision community. In this paper, we present a novel approach to estimate 3D hand joint locations from 2D depth images. Unlike most previous methods, our model captures the 3D spatial information in a depth image, giving the network a fuller understanding of the input. We voxelize the input depth map to capture its 3D structure and apply 3D data augmentations to make the network robust to real-world images. Our network is trained in an end-to-end manner, which significantly reduces time and space complexity compared to other methods. Through extensive experiments, we show that our model outperforms state-of-the-art methods in the time required to train and to predict 3D hand joint locations, making it more suitable for real-world hand pose estimation scenarios.
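
As a rough illustration of the voxel-to-coordinate idea described in the abstract, the sketch below converts a depth map into a binary occupancy grid and regresses 3D joint coordinates with a small 3D CNN trained end-to-end. The grid resolution (32³), joint count (21), camera intrinsics, layer widths, and loss are illustrative assumptions and do not reproduce the FastV2C-HandNet architecture.

```python
# Minimal sketch (not the paper's architecture): voxelize a depth map into an
# occupancy grid and regress 3D joint coordinates with a small 3D CNN.
# Grid size (32^3), joint count (21), intrinsics and layer widths are assumptions.
import numpy as np
import torch
import torch.nn as nn

def voxelize_depth(depth, center, cube_size=250.0, grid=32, fx=588.0, fy=587.0):
    """Back-project a depth map (mm) to 3D points and fill an occupancy grid
    centered on the hand. The camera intrinsics here are placeholders."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    zs = depth.astype(np.float32)
    xs = (us - w / 2.0) * zs / fx
    ys = (vs - h / 2.0) * zs / fy
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    pts = pts[zs.reshape(-1) > 0]                      # drop missing depth
    idx = np.floor((pts - center) / cube_size * grid + grid / 2.0).astype(int)
    keep = np.all((idx >= 0) & (idx < grid), axis=1)   # points inside the cube
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    vox[tuple(idx[keep].T)] = 1.0                      # binary occupancy
    return vox

class Voxel2CoordNet(nn.Module):
    """Generic 3D CNN mapping a voxel grid directly to J x 3 joint coordinates."""
    def __init__(self, num_joints=21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4 * 4, 512), nn.ReLU(),
            nn.Linear(512, num_joints * 3),
        )
        self.num_joints = num_joints

    def forward(self, vox):                            # vox: (B, 1, 32, 32, 32)
        return self.regressor(self.features(vox)).view(-1, self.num_joints, 3)

# End-to-end training step on dummy data: smooth L1 loss on joint coordinates.
if __name__ == "__main__":
    net = Voxel2CoordNet()
    depth = np.random.rand(240, 320).astype(np.float32) * 500  # fake depth map
    vox = torch.from_numpy(voxelize_depth(depth, center=np.array([0.0, 0.0, 250.0])))
    pred = net(vox[None, None])                        # (1, 21, 3)
    target = torch.zeros(1, 21, 3)                     # ground-truth joints (dummy)
    loss = nn.functional.smooth_l1_loss(pred, target)
    loss.backward()
```

Because the network outputs coordinates directly rather than per-voxel heatmaps, the whole pipeline can be trained with a single coordinate regression loss, which is the end-to-end property the abstract emphasizes.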
