3D Multi-Person Pose Estimation
32 papers with code • 5 benchmarks • 4 datasets
This task aims to solve root-relative 3D multi-person pose estimation. No human bounding box or root joint coordinate ground truth is used at test time.
(Image credit: RootNet)
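As a minimal sketch of what "root-relative" means: each person's joint coordinates are expressed relative to a designated root joint (typically the pelvis), so the root maps to the origin. The array shapes and root index below are illustrative assumptions, not tied to any specific dataset.

```python
import numpy as np

ROOT_IDX = 0  # assumption: the pelvis is joint 0

def to_root_relative(joints_abs):
    """Convert absolute camera-space joints to root-relative coordinates.

    joints_abs: array of shape (num_persons, num_joints, 3).
    """
    # Keep the joint axis so subtraction broadcasts per person.
    root = joints_abs[:, ROOT_IDX:ROOT_IDX + 1, :]  # (num_persons, 1, 3)
    return joints_abs - root

# Toy input: two people, three joints each, in meters.
poses = np.array([
    [[0.0, 0.0, 5.0], [0.1, -0.4, 5.0], [-0.1, -0.4, 5.1]],
    [[1.0, 0.0, 6.0], [1.1, -0.4, 6.0], [0.9, -0.4, 6.1]],
])
rel = to_root_relative(poses)
print(rel[0, 0])  # root joint of person 0 maps to the origin: [0. 0. 0.]
```

Because the global root position is subtracted out, recovering absolute poses at test time requires estimating the root depth separately (as in RootNet), which is why the benchmark forbids root-coordinate ground truth during evaluation.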
Libraries
Use these libraries to find 3D Multi-Person Pose Estimation models and implementations.
Latest papers
Multi-HMR: Multi-Person Whole-Body Human Mesh Recovery in a Single Shot
We present Multi-HMR, a strong single-shot model for multi-person 3D human mesh recovery from a single RGB image.
Three Recipes for Better 3D Pseudo-GTs of 3D Human Mesh Estimation in the Wild
Recovering 3D human mesh in the wild is greatly challenging as in-the-wild (ITW) datasets provide only 2D pose ground truths (GTs).
Multi-Person 3D Pose and Shape Estimation via Inverse Kinematics and Refinement
To tackle these challenges, we propose a coarse-to-fine pipeline that benefits from 1) inverse kinematics applied to occlusion-robust 3D skeleton estimation and 2) Transformer-based relation-aware refinement techniques.
AdaptivePose++: A Powerful Single-Stage Network for Multi-Person Pose Regression
With the proposed body representation, we further deliver a compact single-stage multi-person pose regression network, termed AdaptivePose.
Faster VoxelPose: Real-time 3D Human Pose Estimation by Orthographic Projection
While voxel-based methods have achieved promising results for multi-person 3D pose estimation from multiple cameras, they suffer from a heavy computation burden, especially in large scenes.
VirtualPose: Learning Generalizable 3D Human Pose Models from Virtual Data
While monocular 3D pose estimation methods seem to have achieved very accurate results on public datasets, their generalization ability is largely overlooked.
Dual networks based 3D Multi-Person Pose Estimation from Monocular Video
Most existing methods focus on a single person and estimate poses in person-centric coordinates, i.e., coordinates centered on the target person.
Distribution-Aware Single-Stage Models for Multi-Person 3D Pose Estimation
In this paper, we present a novel Distribution-Aware Single-stage (DAS) model for tackling the challenging multi-person 3D pose estimation problem.
Direct Multi-view Multi-person 3D Pose Estimation
Instead of estimating 3D joint locations from costly volumetric representation or reconstructing the per-person 3D pose from multiple detected 2D poses as in previous methods, MvP directly regresses the multi-person 3D poses in a clean and efficient way, without relying on intermediate tasks.
SPEC: Seeing People in the Wild with an Estimated Camera
We then train a novel network that concatenates the camera calibration to the image features and uses these together to regress 3D body shape and pose.