Hand pose estimation is the task of localizing the joints of the hand from an image or a sequence of video frames.
With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations.
We present the first method to capture the 3D total motion of a target person from monocular input.
Official Torch7 implementation of "V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map", CVPR 2018
To overcome these weaknesses, we first cast the 3D hand and human pose estimation problem from a single depth map into a voxel-to-voxel prediction that uses a 3D voxelized grid and estimates the per-voxel likelihood for each keypoint.
SOTA for Hand Pose Estimation on ICVL Hands
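The voxel-to-voxel formulation can be illustrated with two small NumPy helpers. This is only a sketch of the input/output conversions, not the paper's code, and all names are made up: one function voxelizes the 3D points back-projected from a depth map into an occupancy grid, the other reads a 3D keypoint off a per-voxel likelihood volume.

```python
import numpy as np

def depth_to_voxels(points, origin, voxel_size, grid=32):
    """Convert a 3D point cloud (back-projected from a depth map) into a
    binary occupancy grid of shape (grid, grid, grid)."""
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    keep = np.all((idx >= 0) & (idx < grid), axis=1)  # drop out-of-bounds points
    vox[tuple(idx[keep].T)] = 1.0
    return vox

def likelihood_to_keypoint(likelihood, origin, voxel_size):
    """Recover a 3D keypoint from a per-voxel likelihood volume by taking
    the center of the highest-scoring voxel."""
    i, j, k = np.unravel_index(np.argmax(likelihood), likelihood.shape)
    return origin + (np.array([i, j, k]) + 0.5) * voxel_size
```

A network in the style of V2V-PoseNet would sit between these two steps, mapping the occupancy grid to one likelihood volume per keypoint.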
DeepPrior is a simple approach based on Deep Learning that predicts the 3D joint locations of a hand given a depth map.
#4 best model for Hand Pose Estimation on MSRA Hands
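The "prior" in DeepPrior is a learned low-dimensional pose model: the network regresses a small code that is expanded back into the full set of joints through a linear (PCA-style) layer. A rough NumPy sketch of such a bottleneck, with illustrative function names and component counts of my choosing:

```python
import numpy as np

def fit_pose_prior(poses, n_components=8):
    """Fit a linear pose prior from training poses of shape (N, 3*J):
    the mean pose plus the top principal directions."""
    mean = poses.mean(axis=0)
    # SVD of the centered poses gives the principal directions of pose space
    _, _, vt = np.linalg.svd(poses - mean, full_matrices=False)
    return mean, vt[:n_components]

def embed(pose, mean, basis):
    """Project a full pose onto the low-dimensional code."""
    return (pose - mean) @ basis.T

def reconstruct(code, mean, basis):
    """Expand a low-dimensional code back to the full 3D joint layout."""
    return code @ basis + mean
```

In this scheme the network only has to output `n_components` numbers per hand, and every reconstructed pose stays on the subspace spanned by plausible training poses.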
Specifically, we decompose the pose parameters into a set of per-pixel estimations, i.e., 2D heat maps, 3D heat maps, and unit 3D directional vector fields.
SOTA for Hand Pose Estimation on MSRA Hands
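The 2D heat-map half of such a per-pixel decomposition is easy to sketch. Under the common convention (which I am assuming here, not quoting from the paper), each keypoint's training target is a Gaussian blob, and the prediction is decoded by locating the peak:

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=2.0):
    """Render the 2D heat-map training target for one keypoint at (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def decode_heatmap(hm):
    """Recover the (x, y) keypoint location from a predicted heat map
    by taking the argmax pixel."""
    y, x = np.unravel_index(np.argmax(hm), hm.shape)
    return x, y
```

3D heat maps and directional vector fields extend the same per-pixel idea to depth and orientation, with one extra channel group per keypoint.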
In this paper, we present a HAnd Mesh Recovery (HAMR) framework to tackle the problem of reconstructing the full 3D mesh of a human hand from a single RGB image.
The proposed method extracts regions from the feature maps of a convolutional neural network under the guidance of an initially estimated pose, generating more representative features for hand pose estimation.
#2 best model for Hand Pose Estimation on ICVL Hands
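The pose-guided region extraction described above amounts to cropping a window from the feature maps around each initially estimated joint. A minimal NumPy sketch of that cropping step (the helper name and window size are mine, and a real implementation would crop differentiably inside the network):

```python
import numpy as np

def extract_regions(feat, joints_uv, size=4):
    """Crop a (size x size) window from a (C, H, W) feature map around each
    initially estimated joint location (u, v), clamping at the borders."""
    C, H, W = feat.shape
    regions = []
    for u, v in joints_uv:
        u0 = int(np.clip(u - size // 2, 0, W - size))
        v0 = int(np.clip(v - size // 2, 0, H - size))
        regions.append(feat[:, v0:v0 + size, u0:u0 + size])
    return np.stack(regions)  # shape (J, C, size, size)
```

The per-joint regions would then be fed to small branches whose outputs are fused into the refined pose estimate.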