3D Multi-Person Pose Estimation
32 papers with code • 5 benchmarks • 4 datasets
This task addresses root-relative 3D multi-person pose estimation: no ground-truth human bounding boxes or root joint coordinates are used at test time.
(Image credit: RootNet)
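To make the task definition concrete: a root-relative pose places each person's joints relative to a root joint (typically the pelvis), so recovering camera-centric coordinates requires an estimated absolute root position, e.g. from a root localizer such as RootNet. A minimal sketch, with hypothetical values:

```python
import numpy as np

def to_camera_coords(rel_pose, root_xyz):
    """Convert a root-relative pose to camera-centric coordinates.

    rel_pose: (J, 3) array of joints, root joint at the origin.
    root_xyz: (3,) estimated root joint position in camera coordinates.
    """
    return rel_pose + root_xyz  # broadcast the root offset over all joints

# Hypothetical prediction: root at origin, one other joint (metres)
rel_pose = np.array([[0.0, 0.0, 0.0],     # root (pelvis)
                     [0.1, -0.4, 0.05]])  # e.g. a knee
root_xyz = np.array([0.3, 0.2, 4.5])      # person ~4.5 m from the camera

abs_pose = to_camera_coords(rel_pose, root_xyz)
```

Methods on this page that are evaluated root-relatively are scored only on `rel_pose`; estimating `root_xyz` without ground-truth boxes or root coordinates is the extra difficulty the task statement refers to.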
Libraries
Use these libraries to find 3D Multi-Person Pose Estimation models and implementations
Subtasks
Latest papers
Graph-Based 3D Multi-Person Pose Estimation Using Multi-View Images
Following the top-down paradigm, we decompose the task into two stages, i.e., person localization and pose estimation.
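The two-stage decomposition mentioned above is common to most top-down methods. A minimal sketch with placeholder stand-ins (the function names and box values below are illustrative, not from the paper):

```python
def detect_people(image):
    """Stage 1 (placeholder): localize each person as a bounding box (x, y, w, h)."""
    return [(10, 20, 50, 100), (80, 15, 55, 110)]

def estimate_pose(image, box):
    """Stage 2 (placeholder): run a single-person pose estimator on one crop."""
    return {"box": box, "joints": []}  # joints would be filled by the model

def top_down_pipeline(image):
    # One pose estimate per detected person
    return [estimate_pose(image, box) for box in detect_people(image)]

poses = top_down_pipeline(image=None)
```

The design trade-off is that stage 2 runs once per detection, so runtime grows with the number of people, whereas single-stage methods (e.g. BMP below) avoid that per-person cost.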
Real-Time Multi-View 3D Human Pose Estimation using Semantic Feedback to Smart Edge Sensors
We present a novel method for estimation of 3D human poses from a multi-camera setup, employing distributed smart edge sensors coupled with a backend through a semantic feedback loop.
Body Meshes as Points
In this work, we present a single-stage model, Body Meshes as Points (BMP), to simplify the pipeline and lift both efficiency and performance.
AGORA: Avatars in Geography Optimized for Regression Analysis
Additionally, we fine-tune methods on AGORA and show improved performance on both AGORA and 3DPW, confirming the realism of the dataset.
PARE: Part Attention Regressor for 3D Human Body Estimation
Despite significant progress, we show that state-of-the-art 3D human pose and shape estimation methods remain sensitive to partial occlusion and can produce dramatically wrong predictions even though much of the body is observable.
Learning to Estimate Robust 3D Human Mesh from In-the-Wild Crowded Scenes
Second, we propose a joint-based regressor that distinguishes a target person's feature from others.
Multi-View Multi-Person 3D Pose Estimation with Plane Sweep Stereo
Existing approaches for multi-view multi-person 3D pose estimation explicitly establish cross-view correspondences to group 2D pose detections from multiple camera views and solve for the 3D pose estimation for each person.
Monocular 3D Multi-Person Pose Estimation by Integrating Top-Down and Bottom-Up Networks
Beyond integrating top-down and bottom-up networks, we propose a two-person pose discriminator that enforces natural two-person interactions, unlike existing pose discriminators, which are designed solely for a single person and consequently cannot assess inter-person interactions.
Graph and Temporal Convolutional Networks for 3D Multi-person Pose Estimation in Monocular Videos
To tackle this problem, we propose a novel framework integrating graph convolutional networks (GCNs) and temporal convolutional networks (TCNs) to robustly estimate camera-centric multi-person 3D poses without requiring camera parameters.
Temporal Smoothing for 3D Human Pose Estimation and Localization for Occluded People
In multi-person pose estimation, actors can be heavily occluded, or even become fully invisible behind another person.