Search Results for author: Soyong Shin

Found 4 papers, 3 papers with code

WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion

1 code implementation • 12 Dec 2023 • Soyong Shin, Juyong Kim, Eni Halilaj, Michael J. Black

We introduce WHAM (World-grounded Humans with Accurate Motion), which accurately and efficiently reconstructs 3D human motion in a global coordinate system from video.
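The core idea of world-grounded reconstruction can be illustrated with a generic integration scheme: if a model predicts, per frame, the body root's world orientation and its velocity expressed in the local root frame, the global trajectory follows by rotating each velocity into the world frame and accumulating. The function below is a minimal sketch of that step under those assumptions, not WHAM's actual trajectory decoder; all names are illustrative.

```python
import numpy as np

def integrate_global_trajectory(root_orient, root_vel):
    """Integrate per-frame root velocities, expressed in the root's local
    frame, into a world-frame trajectory.

    root_orient: (T, 3, 3) world-frame rotation matrices, one per frame.
    root_vel:    (T, 3) per-frame root displacement in the local frame.
    Returns (T + 1, 3) world-frame root positions, starting at the origin.
    """
    positions = [np.zeros(3)]
    for R, v in zip(root_orient, root_vel):
        # Rotate the local displacement into the world frame, then accumulate.
        positions.append(positions[-1] + R @ v)
    return np.array(positions)
```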

Task: 3D Human Pose Estimation

Markerless Motion Tracking with Noisy Video and IMU Data

1 code implementation • IEEE Transactions on Biomedical Engineering 2023 • Soyong Shin, Zhixiong Li, Eni Halilaj

We propose deep learning models to estimate human movement with noisy data from videos (VideoNet), inertial sensors (IMUNet), and a combination of the two (FusionNet), obviating the need for careful calibration.
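FusionNet itself is a learned model; as a point of reference, the classical baseline for combining two noisy measurements of the same quantity is an inverse-variance weighted average, sketched below. The function name and signature are illustrative assumptions, not the paper's API.

```python
import numpy as np

def fuse_estimates(video_est, imu_est, video_var, imu_var):
    """Inverse-variance weighted average of two noisy estimates.

    Inputs broadcast elementwise; a smaller variance yields a larger
    weight, so the more reliable sensor dominates the fused result.
    """
    w_video = 1.0 / np.asarray(video_var, dtype=float)
    w_imu = 1.0 / np.asarray(imu_var, dtype=float)
    return (w_video * video_est + w_imu * imu_est) / (w_video + w_imu)
```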

Multi-view Human Pose and Shape Estimation Using Learnable Volumetric Aggregation

no code implementations • 26 Nov 2020 • Soyong Shin, Eni Halilaj

In this paper, we propose a learnable volumetric aggregation approach to reconstruct 3D human body pose and shape from calibrated multi-view images.
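The general recipe behind volumetric aggregation can be sketched as: for each point in a 3D grid, project it into every calibrated view using that view's projection matrix, sample the 2D feature map there, and average the samples across views. The NumPy sketch below illustrates this recipe with nearest-neighbor sampling; it is an assumption-laden illustration, not the paper's learnable aggregation module (which would use bilinear sampling and learned weights).

```python
import numpy as np

def aggregate_volume(feature_maps, proj_matrices, grid_points):
    """Average per-view features at projected 3D grid points.

    feature_maps:  list of (H, W, C) per-view feature maps.
    proj_matrices: list of (3, 4) camera projection matrices.
    grid_points:   (N, 3) world-space grid coordinates.
    Returns (N, C) features averaged over the views each point lands in.
    """
    n = grid_points.shape[0]
    homog = np.concatenate([grid_points, np.ones((n, 1))], axis=1)  # (N, 4)
    out = np.zeros((n, feature_maps[0].shape[-1]))
    count = np.zeros((n, 1))
    for fm, P in zip(feature_maps, proj_matrices):
        uvw = homog @ P.T                      # (N, 3) homogeneous pixels
        uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide
        u = np.round(uv[:, 0]).astype(int)     # nearest-neighbor sampling
        v = np.round(uv[:, 1]).astype(int)
        h, w = fm.shape[:2]
        # Keep only points in front of the camera and inside the image.
        valid = (uvw[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        out[valid] += fm[v[valid], u[valid]]
        count[valid] += 1
    return out / np.maximum(count, 1)          # avoid division by zero
```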

