Search Results for author: Yannan He

Found 5 papers, 1 paper with code

NRDF: Neural Riemannian Distance Fields for Learning Articulated Pose Priors

no code implementations • 5 Mar 2024 • Yannan He, Garvita Tiwari, Tolga Birdal, Jan Eric Lenssen, Gerard Pons-Moll

Faithfully modeling the space of articulations is a crucial task that allows recovery and generation of realistic poses, and remains a notorious challenge.

Pose Estimation

LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors

no code implementations • 30 May 2022 • Yiming Ren, Chengfeng Zhao, Yannan He, Peishan Cong, Han Liang, Jingyi Yu, Lan Xu, Yuexin Ma

We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, using only a single LiDAR and 4 IMUs, which are easy to set up and lightweight to wear.

Sensor Fusion • Translation

Interaction Replica: Tracking Human-Object Interaction and Scene Changes From Human Motion

no code implementations • 5 May 2022 • Vladimir Guzov, Julian Chibane, Riccardo Marin, Yannan He, Yunus Saracoglu, Torsten Sattler, Gerard Pons-Moll

For such emerging applications to see widespread adoption, the sensor setup used to capture the interactions needs to be inexpensive and easy to use for non-expert users.

Human-Object Interaction Detection • Object • +2

ChallenCap: Monocular 3D Capture of Challenging Human Performances using Multi-Modal References

2 code implementations • CVPR 2021 • Yannan He, Anqi Pang, Xin Chen, Han Liang, Minye Wu, Yuexin Ma, Lan Xu

We propose a hybrid motion inference stage with a generation network: a temporal encoder-decoder extracts motion details from the pair-wise sparse-view reference, while a motion discriminator leverages the unpaired marker-based references to capture specific challenging motion characteristics in a data-driven manner.
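To make the abstract's two-component structure concrete, here is a minimal toy sketch of a temporal encoder-decoder paired with a motion discriminator. This is not the paper's implementation: all dimensions, weight shapes, and function names below are illustrative assumptions, using plain NumPy linear maps in place of the actual generation network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not from the paper):
T, D, H = 8, 6, 16  # sequence length, per-frame pose dim, latent dim

# Temporal encoder-decoder: compress a pose sequence into a latent code,
# then decode it back into a refined pose sequence.
W_enc = rng.normal(scale=0.1, size=(T * D, H))
W_dec = rng.normal(scale=0.1, size=(H, T * D))

def encode_decode(seq):
    """seq: (T, D) noisy pose sequence -> (T, D) refined sequence."""
    latent = np.tanh(seq.reshape(-1) @ W_enc)
    return (latent @ W_dec).reshape(T, D)

# Motion discriminator: scores how plausible a motion sequence looks;
# in training it would be fed unpaired marker-based reference motions.
w_disc = rng.normal(scale=0.1, size=T * D)

def discriminate(seq):
    """Return a plausibility score in (0, 1) via a sigmoid."""
    return 1.0 / (1.0 + np.exp(-(seq.reshape(-1) @ w_disc)))

noisy = rng.normal(size=(T, D))      # stand-in for a sparse-view estimate
refined = encode_decode(noisy)       # generator output
score = discriminate(refined)        # adversarial plausibility signal
```

In an actual adversarial setup, the discriminator's score would drive a loss pushing the generator's refined sequences toward the distribution of marker-based reference motions; here it only illustrates the data flow between the two components.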
