Hand Pose Estimation
87 papers with code • 10 benchmarks • 22 datasets
Hand pose estimation is the task of localizing the joints of the hand from an image or a sequence of video frames.
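Many 2D hand pose estimators output one confidence heatmap per joint and take the peak of each map as the joint location. A minimal sketch of that decoding step (the 21-joint skeleton, map size, and function name here are illustrative assumptions, not any specific model's API):

```python
import numpy as np

def decode_heatmaps(heatmaps):
    """Decode per-joint heatmaps into 2D joint coordinates.

    heatmaps: array of shape (num_joints, H, W); each channel is a
    confidence map for one hand joint. Returns a (num_joints, 2)
    array of (x, y) pixel coordinates at each map's peak.
    """
    num_joints, h, w = heatmaps.shape
    flat = heatmaps.reshape(num_joints, -1)
    idx = flat.argmax(axis=1)                 # peak index per joint
    ys, xs = np.unravel_index(idx, (h, w))    # back to 2D coordinates
    return np.stack([xs, ys], axis=1)

# Toy example: 21 joints (a common hand skeleton) on 64x64 maps
rng = np.random.default_rng(0)
maps = rng.random((21, 64, 64))
joints = decode_heatmaps(maps)                # shape (21, 2)
```

Real systems usually add sub-pixel refinement around the peak, but the argmax step above is the core of heatmap-based decoding.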
(Image credit: Pose-REN)
Libraries
Use these libraries to find Hand Pose Estimation models and implementations.
Latest papers
Harmonious Feature Learning for Interactive Hand-Object Pose Estimation
Notably, our model's performance on hand pose estimation even surpasses that of existing works that perform only single-hand pose estimation.
HandR2N2: Iterative 3D Hand Pose Estimation Using a Residual Recurrent Neural Network
3D hand pose estimation is a critical task in various human-computer interaction applications.
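Iterative estimators like this refine the pose over several steps, each step predicting a residual correction to the current estimate. A toy sketch of the generic residual-refinement loop, not HandR2N2 itself (the `step` function is a stand-in for a learned network, and the target-encoding features are an illustrative assumption):

```python
import numpy as np

def refine_pose(initial_pose, features, num_iters=3):
    """Generic iterative pose refinement: at every iteration a residual
    update is added to the current pose estimate."""

    def step(pose, feats):
        # Hypothetical learned update; here it simply pulls the pose
        # halfway toward a target pose encoded in `feats` as (21, 3).
        return 0.5 * (feats - pose)

    pose = initial_pose
    for _ in range(num_iters):
        pose = pose + step(pose, features)  # residual update
    return pose

target = np.ones((21, 3))   # 21 hand joints in 3D
init = np.zeros((21, 3))
refined = refine_pose(init, target)   # converges toward `target`
```

The residual formulation keeps each step small and lets the same update module be applied recurrently, which is the general idea behind recurrent refinement networks.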
3DGazeNet: Generalizing Gaze Estimation with Weak-Supervision from Synthetic Views
To close the gap between image domains, we create a large-scale dataset of diverse faces with gaze pseudo-annotations, which we extract based on the 3D geometry of the scene, and design a multi-view supervision framework to balance their effect during training.
Using Hand Pose Estimation To Automate Open Surgery Training Feedback
Conclusion: This research demonstrates the benefit of pose estimations for open surgery by analyzing their effectiveness in gesture segmentation and skill assessment.
THOR-Net: End-to-end Graformer-based Realistic Two Hands and Object Reconstruction with Self-supervision
In the feature extraction stage, a Keypoint RCNN is used to extract 2D poses, feature maps, heatmaps, and bounding boxes from a monocular RGB image.
DART: Articulated Hand Model with Diverse Accessories and Rich Textures
A Unity GUI is also provided to generate synthetic hand data with user-defined settings, e.g., pose, camera, background, lighting, textures, and accessories.
VL4Pose: Active Learning Through Out-Of-Distribution Detection For Pose Estimation
We begin with a simple premise: pose estimators often predict incoherent poses for out-of-distribution samples.
Hierarchical Temporal Transformer for 3D Hand Pose Estimation and Action Recognition from Egocentric RGB Videos
Understanding dynamic hand motions and actions from egocentric RGB videos is a fundamental yet challenging task due to self-occlusion and ambiguity.
TempCLR: Reconstructing Hands via Time-Coherent Contrastive Learning
We introduce TempCLR, a new time-coherent contrastive learning approach for the structured regression task of 3D hand reconstruction.
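Contrastive objectives of this kind score each anchor embedding against a positive (e.g., the same hand in a temporally nearby frame) and against the other samples in the batch. A minimal InfoNCE-style sketch, not TempCLR's exact formulation (batch size, embedding dimension, and temperature are illustrative assumptions):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss on L2-normalized embeddings.
    Matched anchor/positive pairs sit on the diagonal of the
    similarity matrix and should dominate their row."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                      # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                  # cross-entropy on pairs

rng = np.random.default_rng(0)
z = rng.standard_normal((8, 16))
loss_matched = info_nce(z, z)                           # identical pairs: low loss
loss_random = info_nce(z, rng.standard_normal((8, 16)))
```

Time-coherent variants build the positive pairs from neighboring frames of the same video so that the embedding becomes stable over time.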
In-Hand Pose Estimation and Pin Inspection for Insertion of Through-Hole Components
Deep-learning-based segmentation of the pins is performed, and the inspection pose is found through simulation.