3D Multi-Person Pose Estimation (absolute)
11 papers with code • 1 benchmarks • 2 datasets
This task aims to solve absolute 3D multi-person pose estimation (camera-centric coordinates). No ground-truth human bounding boxes or human root-joint coordinates are used at test time.
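The "absolute" setting means each pose is expressed in the camera's coordinate frame rather than relative to the person's root joint. A minimal sketch of this conversion, assuming a pinhole camera model and hypothetical names (`back_project`, `to_camera_centric` are illustrative, not from any listed paper's code):

```python
# Sketch: turn a root-relative 3D pose into absolute (camera-centric)
# coordinates, given an estimated root-joint depth. All names, shapes,
# and intrinsic values here are illustrative assumptions.

def back_project(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) at depth z into camera space using
    pinhole intrinsics (focal lengths fx, fy; principal point cx, cy)."""
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    return (x, y, z)

def to_camera_centric(root_cam, pose_rel):
    """Shift every root-relative joint by the absolute root position."""
    rx, ry, rz = root_cam
    return [(x + rx, y + ry, z + rz) for (x, y, z) in pose_rel]

# Example: root joint detected at pixel (640, 360), estimated 3000 mm away.
root = back_project(640, 360, 3000, fx=1500, fy=1500, cx=640, cy=360)
abs_pose = to_camera_centric(root, [(0, 0, 0), (100, -50, 20)])
# abs_pose is now in camera-centric millimetres, e.g. the root sits at depth 3000.
```

Methods in this task differ mainly in how they estimate the root depth `z` (e.g. from apparent person size, a depth network, or temporal cues) without ground-truth boxes or root coordinates.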
(Image credit: RootNet)
Most implemented papers
Towards 3D Human Pose Estimation in the Wild: a Weakly-supervised Approach
We propose a weakly-supervised transfer learning method that uses mixed 2D and 3D labels in a unified deep neural network with a two-stage cascaded structure.
Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB
Our approach uses novel occlusion-robust pose-maps (ORPM) which enable full body pose inference even under strong partial occlusions by other people and objects in the scene.
Camera Distance-aware Top-down Approach for 3D Multi-person Pose Estimation from a Single RGB Image
Although significant improvement has been achieved recently in 3D human pose estimation, most previous methods only handle the single-person case.
RNN-DBSCAN: A Density-Based Clustering Algorithm Using Reverse Nearest Neighbor Density Estimates
First, problem complexity is reduced to a single parameter (the choice of k nearest neighbors), and second, the method improves handling of large variations in cluster density (heterogeneous density).
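The reverse-nearest-neighbor density estimate behind RNN-DBSCAN can be sketched in a few lines: a point's density is the number of other points that count it among their own k nearest neighbors, so a single parameter k replaces DBSCAN's eps/minPts pair. A brute-force, pure-Python illustration (not the paper's implementation):

```python
# Sketch of the reverse-k-NN density idea: dense points are listed as a
# nearest neighbor by many other points; outliers by few or none.

def knn(points, i, k):
    """Indices of the k nearest neighbors of points[i] (brute-force Euclidean)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(points[i], points[j])), j)
        for j in range(len(points)) if j != i
    )
    return [j for _, j in dists[:k]]

def reverse_knn_counts(points, k):
    """For each point, count how many other points include it in their k-NN."""
    counts = [0] * len(points)
    for i in range(len(points)):
        for j in knn(points, i, k):
            counts[j] += 1
    return counts

pts = [(0, 0), (0, 1), (1, 0), (10, 10)]
counts = reverse_knn_counts(pts, k=2)
# The isolated point at (10, 10) receives a reverse-neighbor count of 0.
```

Points whose count falls below k are treated as low-density in the full algorithm, which then grows clusters from high-density cores.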
Multi-Person Absolute 3D Human Pose Estimation with Weak Depth Supervision
In 3D human pose estimation one of the biggest problems is the lack of large, diverse datasets.
HDNet: Human Depth Estimation for Multi-Person Camera-Space Localization
Current works on multi-person 3D pose estimation mainly focus on the estimation of the 3D joint locations relative to the root joint and ignore the absolute locations of each pose.
Graph and Temporal Convolutional Networks for 3D Multi-person Pose Estimation in Monocular Videos
To tackle this problem, we propose a novel framework integrating graph convolutional networks (GCNs) and temporal convolutional networks (TCNs) to robustly estimate camera-centric multi-person 3D poses that do not require camera parameters.
Monocular 3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks
Besides integrating top-down and bottom-up networks, we propose a two-person pose discriminator that enforces natural two-person interactions; existing pose discriminators are designed solely for a single person and consequently cannot assess natural inter-person interactions.
Distribution-Aware Single-Stage Models for Multi-Person 3D Pose Estimation
In this paper, we present a novel Distribution-Aware Single-stage (DAS) model for tackling the challenging multi-person 3D pose estimation problem.
Dual networks based 3D Multi-Person Pose Estimation from Monocular Video
Most existing methods focus on single persons and estimate poses in person-centric coordinates, i.e., coordinates relative to the center of the target person.