Hand pose estimation is the task of finding the joints of the hand from an image or set of video frames.
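Concretely, a predicted hand pose is usually a set of joint coordinates decoded from per-joint score maps. Below is a minimal sketch assuming the common (but not stated above) 21-keypoint hand convention and a simple argmax decoding of heatmaps:

```python
import numpy as np

# Assumption: the hand is represented by 21 keypoints
# (1 wrist + 4 joints per finger), a widespread convention.
NUM_JOINTS = 21

def decode_heatmaps(heatmaps):
    """Convert per-joint heatmaps of shape (J, H, W) to (J, 2) pixel
    coordinates (x, y) by taking the argmax of each map."""
    joints = np.zeros((heatmaps.shape[0], 2))
    for j, hm in enumerate(heatmaps):
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        joints[j] = (x, y)
    return joints

# Toy example: one sharply peaked heatmap per joint.
hm = np.zeros((NUM_JOINTS, 64, 64))
for j in range(NUM_JOINTS):
    hm[j, 10 + j, 20] = 1.0
coords = decode_heatmaps(hm)
```

Real systems typically replace the hard argmax with a differentiable soft-argmax so the decoding can be trained end to end.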
(Image credit: Pose-REN)

The first stage of the neural network extracts hand patches and estimates initial hand poses from the depth images in an iterative fashion.
3D hand pose estimation has received a lot of attention for its wide range of applications and has made great progress owing to the development of deep learning.
Further, we design a multi-source discriminator with hand poses, bones and the input image as input to capture intrinsic features, which distinguishes the predicted 3D hand pose from the ground-truth and leads to anthropomorphically valid hand poses.
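One of the discriminator's inputs, the bones, can be derived directly from the predicted joints as child-minus-parent vectors over the kinematic tree. A minimal sketch, assuming a hypothetical 21-joint tree (the paper's exact topology and discriminator are not shown here):

```python
import numpy as np

# Hypothetical kinematic tree (assumption): joint 0 is the wrist;
# each finger is a 4-joint chain hanging off the wrist.
PARENT = [-1,
          0, 1, 2, 3,       # thumb
          0, 5, 6, 7,       # index
          0, 9, 10, 11,     # middle
          0, 13, 14, 15,    # ring
          0, 17, 18, 19]    # pinky

def bones_from_joints(joints):
    """Bone vectors (child - parent) for every non-root joint,
    shape (20, 3). Such bone features can be fed to a discriminator
    alongside the raw pose and the input image."""
    joints = np.asarray(joints)
    return np.stack([joints[c] - joints[p]
                     for c, p in enumerate(PARENT) if p >= 0])

# Toy usage with synthetic 3D joints.
joints = np.arange(63, dtype=float).reshape(21, 3)
bones = bones_from_joints(joints)
```

Feeding bones rather than only raw joints lets the discriminator penalize implausible bone lengths and directions, which is one route to anthropomorphically valid poses.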
This work addresses the challenging problem of unconstrained 3D hand pose estimation using monocular RGB images.
A hand, which is an articulated object, is composed of six local parts: the palm and five independent fingers.
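The six-part decomposition can be made explicit as an index mapping over the keypoints. A small sketch, assuming a hypothetical layout where joint 0 is the palm root and each finger contributes four consecutive joints:

```python
# Assumption: joint 0 is the palm/wrist root, then each finger
# contributes 4 joints in order (base to tip).
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def part_joint_indices():
    """Map each of the six local parts (palm + five fingers)
    to its joint indices in a 21-keypoint layout."""
    parts = {"palm": [0]}
    for f, name in enumerate(FINGERS):
        start = 1 + 4 * f
        parts[name] = list(range(start, start + 4))
    return parts

parts = part_joint_indices()
```

Part-wise indexing like this lets a model estimate or regularize each finger independently before fusing them into a full hand pose.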
Despite great progress in 3D pose estimation from single-view images or videos, it remains a challenging task due to the substantial depth ambiguity and severe self-occlusions.
Hand pose estimation from monocular RGB inputs is a highly challenging task.
In contrast to existing research on 2D or 3D hand pose estimation from RGB and/or depth image data, HAMR can provide a more expressive and useful mesh representation for monocular hand image understanding.
Since the HFE and the HFD can be trained without 3D hand pose annotations, the proposed method can make full use of unannotated data during the training phase.
In this paper, we propose a new architecture called Adaptive Graphical Model Network (AGMN) to tackle the task of 2D hand pose estimation from a monocular RGB image.
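The core idea behind combining a CNN with a graphical model is that a joint's unary heatmap can be refined by a message from its parent joint, penalizing deviations from an expected spatial offset. A toy brute-force stand-in for such inference (a quadratic deformation cost on a small grid; not AGMN's actual adaptive parameters):

```python
import numpy as np

def refine_with_pairwise(child_hm, parent_hm, offset, sigma=2.0):
    """Combine a child joint's unary heatmap with a message from its
    parent: for each child location, add the best parent score minus a
    quadratic cost around the expected parent->child offset.
    Brute force over a small grid, for illustration only."""
    H, W = child_hm.shape
    ys, xs = np.mgrid[0:H, 0:W]
    refined = np.empty_like(child_hm)
    for y in range(H):
        for x in range(W):
            dy = y - (ys + offset[0])
            dx = x - (xs + offset[1])
            cost = (dy**2 + dx**2) / (2 * sigma**2)
            refined[y, x] = child_hm[y, x] + np.max(parent_hm - cost)
    return refined

# Toy usage: a confident parent at (5, 5) and an expected offset of
# (3, 0) pull the child's maximum toward (8, 5).
parent = np.zeros((16, 16)); parent[5, 5] = 1.0
child = np.zeros((16, 16))
refined = refine_with_pairwise(child, parent, offset=(3, 0))
```

Practical systems replace this O(H²W²) loop with distance transforms or convolutional message passing, and learn the pairwise parameters instead of fixing them.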