3D Multi-Person Pose Estimation (root-relative)
11 papers with code • 1 benchmark • 1 dataset
This task aims to solve root-relative 3D multi-person pose estimation in a person-centric coordinate system. No ground-truth human bounding boxes or human root joint coordinates are used during the testing stage.
(Image credit: RootNet)
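The task description above can be illustrated with a minimal sketch: given absolute 3D joint coordinates in the camera frame, a root-relative pose is obtained by subtracting each person's root joint so that the root becomes the origin of that person's coordinate system. The function name, joint layout, and choice of root index below are assumptions for illustration, not part of any specific method on this page.

```python
import numpy as np

def to_root_relative(poses, root_idx=0):
    """Convert absolute 3D poses to root-relative (person-centric) poses.

    poses: array of shape (num_people, num_joints, 3), camera coordinates.
    root_idx: index of the root joint (commonly the pelvis) - an assumption here.
    """
    poses = np.asarray(poses, dtype=float)
    roots = poses[:, root_idx:root_idx + 1, :]  # (num_people, 1, 3)
    # Subtracting the root makes each person's root joint the origin.
    return poses - roots

# Two people, three joints each, in meters (hypothetical values)
poses = np.array([
    [[0.0, 0.0, 3.0], [0.1, -0.5, 3.1], [-0.1, -0.9, 3.0]],
    [[1.0, 0.0, 5.0], [1.1, -0.5, 5.2], [0.9, -0.9, 5.1]],
])
rel = to_root_relative(poses)
```

Note that the person's absolute depth (the root's camera-frame position) is discarded by this conversion, which is why methods on this page do not require root joint coordinates at test time.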
Latest papers with no code
Towards Robust and Smooth 3D Multi-Person Pose Estimation from Monocular Videos in the Wild
3D pose estimation is an invaluable task in computer vision with various practical applications.
Explicit Occlusion Reasoning for Multi-person 3D Human Pose Estimation
Inspired by the remarkable ability of humans to infer occluded joints from visible cues, we develop a method to explicitly model this process that significantly improves bottom-up multi-person human pose estimation with or without occlusions.
Dynamic Graph Reasoning for Multi-person 3D Pose Estimation
Finally, the 3D poses are decoded according to dynamic decoding graphs for each detected person.
Permutation-Invariant Relational Network for Multi-person 3D Pose Estimation
For this purpose, we build a residual-like permutation-invariant network that successfully refines potentially corrupted initial 3D poses estimated by an off-the-shelf detector.
Deep Monocular 3D Human Pose Estimation via Cascaded Dimension-Lifting
The 3D pose estimation from a single image is a challenging problem due to depth ambiguity.
PI-Net: Pose Interacting Network for Multi-Person Monocular 3D Pose Estimation
Our pose interacting network, or PI-Net, inputs the initial pose estimates of a variable number of interactees into a recurrent architecture used to refine the pose of the person-of-interest.
HMOR: Hierarchical Multi-Person Ordinal Relations for Monocular Multi-Person 3D Pose Estimation
The HMOR hierarchically encodes interaction information as ordinal relations of depths and angles, capturing body-part- and joint-level semantics while maintaining global consistency.