3D Multi-Person Pose Estimation (absolute)
12 papers with code • 1 benchmarks • 2 datasets
This task aims to solve absolute 3D multi-person pose estimation in camera-centric coordinates. No ground-truth human bounding boxes or human root joint coordinates are used during the testing stage.
(Image credit: RootNet)
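The difference between person-centric (root-relative) and camera-centric (absolute) output can be sketched in a few lines. This is an illustrative snippet, not code from any listed paper; the function name and toy joint values are assumptions:

```python
import numpy as np

def to_camera_centric(root_relative_pose, root_cam):
    """Shift a root-relative 3D pose (J x 3, metres) into camera-centric
    coordinates by adding the estimated 3D root joint position."""
    return root_relative_pose + root_cam[None, :]

# Toy 3-joint pose centred on the pelvis (the root joint),
# with the root estimated 4 m in front of the camera.
pose_rel = np.array([[0.00,  0.00, 0.00],   # pelvis (root)
                     [0.10, -0.50, 0.02],   # shoulder
                     [0.12, -0.75, 0.05]])  # head
root_cam = np.array([0.3, -0.2, 4.0])

pose_abs = to_camera_centric(pose_rel, root_cam)
```

Absolute methods on this benchmark must estimate `root_cam` themselves, rather than receiving it as ground truth.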
Latest papers
VirtualPose: Learning Generalizable 3D Human Pose Models from Virtual Data
While monocular 3D pose estimation appears to achieve very accurate results on public datasets, the generalization ability of these models is largely overlooked.
Dual networks based 3D Multi-Person Pose Estimation from Monocular Video
Most methods focus on a single person and estimate poses in person-centric coordinates, i.e., coordinates relative to the center of the target person.
Distribution-Aware Single-Stage Models for Multi-Person 3D Pose Estimation
In this paper, we present a novel Distribution-Aware Single-stage (DAS) model for tackling the challenging multi-person 3D pose estimation problem.
Monocular 3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks
Besides integrating top-down and bottom-up networks, we propose a two-person pose discriminator that enforces natural two-person interactions; existing pose discriminators are designed solely for a single person and consequently cannot assess inter-person interactions.
Graph and Temporal Convolutional Networks for 3D Multi-person Pose Estimation in Monocular Videos
To tackle this problem, we propose a novel framework integrating graph convolutional networks (GCNs) and temporal convolutional networks (TCNs) to robustly estimate camera-centric multi-person 3D poses without requiring camera parameters.
SMAP: Single-Shot Multi-Person Absolute 3D Pose Estimation
Recovering multi-person 3D poses with absolute scales from a single RGB image is a challenging problem due to the inherent depth and scale ambiguity from a single view.
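The depth/scale ambiguity mentioned here is a property of perspective projection: a person twice as far away but twice as large projects to the same pixels. A minimal demonstration with an assumed pinhole camera (the intrinsic values are illustrative):

```python
def project(x, y, z, fx, fy, cx, cy):
    """Project a 3D camera-space point (metres) to pixel coordinates
    using the pinhole camera model."""
    return (fx * x / z + cx, fy * y / z + cy)

# Illustrative intrinsics for a 1920x1080 camera.
FX, FY, CX, CY = 1500.0, 1500.0, 960.0, 540.0

# Same direction from the camera, doubled in both size and distance:
p_near = project(0.5, 0.2, 4.0, FX, FY, CX, CY)
p_far  = project(1.0, 0.4, 8.0, FX, FY, CX, CY)
# Both points land on the same pixel, so a single view cannot
# distinguish them without extra cues (e.g. known human scale).
```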
HDNet: Human Depth Estimation for Multi-Person Camera-Space Localization
Current works on multi-person 3D pose estimation mainly focus on the estimation of the 3D joint locations relative to the root joint and ignore the absolute locations of each pose.
Multi-Person Absolute 3D Human Pose Estimation with Weak Depth Supervision
In 3D human pose estimation one of the biggest problems is the lack of large, diverse datasets.
Camera Distance-aware Top-down Approach for 3D Multi-person Pose Estimation from a Single RGB Image
Although significant improvements have been achieved recently in 3D human pose estimation, most previous methods address only the single-person case.
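Camera distance-aware pipelines of this kind typically lift each detected person's 2D root joint into camera space once a root depth has been estimated, via pinhole back-projection. A sketch under assumed intrinsics (not the papers' actual code):

```python
def backproject_root(u, v, z, fx, fy, cx, cy):
    """Lift a 2D root joint (u, v) in pixels, with estimated depth z
    in metres, into camera-centric 3D coordinates (pinhole model)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Illustrative intrinsics for a 1920x1080 camera; the detected root
# joint sits at pixel (1100, 600) with an estimated depth of 4 m.
root_3d = backproject_root(u=1100.0, v=600.0, z=4.0,
                           fx=1500.0, fy=1500.0, cx=960.0, cy=540.0)
```

The root-relative pose from a second network can then be shifted by `root_3d` to obtain the absolute pose.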