3D Interacting Hand Pose Estimation
10 papers with code • 1 benchmark • 1 dataset
Most implemented papers
InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image
Therefore, we first propose (1) a large-scale dataset, InterHand2.6M, and (2) a baseline network, InterNet, for 3D interacting hand pose estimation from a single RGB image.
Keypoint Transformer: Solving Joint Identification in Challenging Hands and Object Interactions for Accurate 3D Pose Estimation
We propose a robust and accurate method for estimating the 3D poses of two hands in close interaction from a single color image.
Learning to Disambiguate Strongly Interacting Hands via Probabilistic Per-pixel Part Segmentation
In natural conversation and interaction, our hands often overlap or are in contact with each other.
Interacting Attention Graph for Single Image Two-Hand Reconstruction
To solve the occlusion and interaction challenges of two-hand reconstruction, we introduce two novel attention-based modules in each upsampling step of the original GCN.
3D Interacting Hand Pose Estimation by Hand De-occlusion and Removal
Unlike most previous works that directly predict the 3D poses of two interacting hands simultaneously, we propose to decompose the challenging interacting hand pose estimation task and estimate the pose of each hand separately.
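The decompose-then-estimate idea above can be illustrated with a minimal sketch: isolate each hand region first, then run a single-hand estimator on each patch. All function names and shapes here are hypothetical placeholders, not the paper's actual pipeline (which additionally de-occludes and removes the distracting hand before estimation).

```python
import numpy as np

def crop_hand(image, box):
    """Crop one hand region given an (x, y, w, h) box. Illustrative helper."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def single_hand_pose(patch):
    """Placeholder single-hand estimator: returns 21 dummy 3D joints."""
    return np.zeros((21, 3))

def interacting_pose(image, right_box, left_box):
    # Estimate each hand separately on its own patch, instead of
    # regressing both hands jointly from the full image.
    right = single_hand_pose(crop_hand(image, right_box))
    left = single_hand_pose(crop_hand(image, left_box))
    return right, left

img = np.random.rand(256, 256, 3)
right_joints, left_joints = interacting_pose(img, (10, 10, 100, 100), (120, 10, 100, 100))
```

The point of the decomposition is that each per-hand sub-problem looks like ordinary single-hand pose estimation, for which mature methods exist.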
Decoupled Iterative Refinement Framework for Interacting Hands Reconstruction from a Single RGB Image
On the other hand, there are complex spatial relationships between interacting hands, which significantly enlarge the solution space of hand poses and increase the difficulty of network learning.
ACR: Attention Collaboration-based Regressor for Arbitrary Two-Hand Reconstruction
Our method significantly outperforms the best interacting-hand approaches on the InterHand2.6M dataset while yielding performance comparable to state-of-the-art single-hand methods on the FreiHand dataset.
A2J-Transformer: Anchor-to-Joint Transformer Network for 3D Interacting Hand Pose Estimation from a Single RGB Image
3D interacting hand pose estimation from a single RGB image is a challenging task due to severe self-occlusion and inter-occlusion of the hands, confusingly similar appearance patterns between the two hands, the ill-posed mapping of joint positions from 2D to 3D, etc. To address these challenges, we propose to extend A2J, the state-of-the-art depth-based 3D single-hand pose estimation method, to the RGB domain under interacting-hand conditions.
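The anchor-to-joint idea can be sketched numerically: dense anchor points each predict an offset to every joint plus a response score, and each joint estimate is the score-weighted average of (anchor + offset). The shapes and random values below are illustrative only, not the network's actual outputs.

```python
import numpy as np

num_anchors, num_joints = 64, 21
anchors = np.random.rand(num_anchors, 2) * 256          # 2D anchor positions in the image
offsets = np.random.randn(num_anchors, num_joints, 2)   # predicted anchor-to-joint offsets
logits = np.random.randn(num_anchors, num_joints)       # per-anchor response scores per joint

# Softmax over the anchor axis picks out the informative anchors per joint.
weights = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

# Weighted aggregation: joint_j = sum_a  w[a, j] * (anchor_a + offset[a, j]).
joints = (weights[..., None] * (anchors[:, None, :] + offsets)).sum(axis=0)
# joints has shape (num_joints, 2)
```

Aggregating over many anchors acts as an ensemble, which is what makes this family of regressors robust to occlusion of any single image region.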
Extract-and-Adaptation Network for 3D Interacting Hand Mesh Recovery
Our two novel tokens are derived from a combination of the separated features of the two hands; hence, they are much more robust to the distant token problem.
RenderIH: A Large-scale Synthetic Dataset for 3D Interacting Hand Pose Estimation
Current interacting hand (IH) datasets are relatively simplistic in background and texture, their hand joints are annotated by a machine annotator, which may introduce inaccuracies, and their pose distributions have limited diversity.