Search Results for author: Chengfeng Zhao

Found 5 papers, 0 papers with code

LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment

no code implementations27 Feb 2024 Yiming Ren, Xiao Han, Chengfeng Zhao, Jingya Wang, Lan Xu, Jingyi Yu, Yuexin Ma

For human-centric large-scale scenes, fine-grained modeling of 3D human global pose and shape is significant for scene understanding and can benefit many real-world applications.

Scene Understanding

I'M HOI: Inertia-aware Monocular Capture of 3D Human-Object Interactions

no code implementations10 Dec 2023 Chengfeng Zhao, Juze Zhang, Jiashen Du, Ziwei Shan, Junye Wang, Jingyi Yu, Jingya Wang, Lan Xu

In this paper, we present I'm-HOI, a monocular scheme to faithfully capture the 3D motions of both the human and the object in a novel setting: using a minimal setup of a single RGB camera and an object-mounted Inertial Measurement Unit (IMU).

Human-Object Interaction Detection, Object
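As a rough illustration of the camera-plus-IMU setting described in this entry (a hedged sketch, not I'm-HOI's actual method), the snippet below integrates an object-mounted IMU's gyroscope readings to propagate the object's orientation between camera frames and pairs it with a placeholder image-based translation. All function names, shapes, and rates here are assumptions.

```python
# Sketch only: gyroscope integration for an object-mounted IMU between video
# frames, combined with a hypothetical camera-based translation estimate.
# This is NOT the paper's pipeline; every value below is illustrative.
import numpy as np

def integrate_gyro(R: np.ndarray, omega: np.ndarray, dt: float) -> np.ndarray:
    """Propagate rotation matrix R by angular velocity omega (rad/s) over dt
    using a first-order exponential-map (Rodrigues) update."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-8:
        return R
    axis = omega / np.linalg.norm(omega)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    dR = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return R @ dR

# Usage with dummy data: orientation from the IMU stream, translation from a
# (hypothetical) image-based object detector, merged into one 4x4 pose.
R_obj = np.eye(3)
for omega in np.random.randn(30, 3) * 0.1:   # 30 gyro samples at 60 Hz
    R_obj = integrate_gyro(R_obj, omega, dt=1.0 / 60.0)
t_obj = np.array([0.2, 0.0, 1.5])            # placeholder camera-frame translation
object_pose = np.eye(4)
object_pose[:3, :3] = R_obj
object_pose[:3, 3] = t_obj
print(object_pose)
```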

LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors

no code implementations30 May 2022 Yiming Ren, Chengfeng Zhao, Yannan He, Peishan Cong, Han Liang, Jingyi Yu, Lan Xu, Yuexin Ma

We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, using only a single LiDAR and 4 IMUs, which are convenient to set up and lightweight to wear.

Sensor Fusion, Translation
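The entry above describes fusing a single LiDAR with 4 body-worn IMUs; the sketch below shows one generic way such per-frame inputs could be paired (the LiDAR scan giving a coarse global root position, the IMU rotations constraining local limb poses) before being passed to a learned pose regressor. This is an assumption-laden illustration, not the paper's pipeline; all names and shapes are invented.

```python
# Sketch only: pairing a segmented LiDAR point cloud with sparse IMU
# orientations into one per-frame record. Not the paper's method.
from dataclasses import dataclass
import numpy as np

@dataclass
class FusedFrame:
    root_translation: np.ndarray   # (3,)      coarse global position from LiDAR
    imu_orientations: np.ndarray   # (4, 3, 3) rotation matrices from 4 IMUs
    points: np.ndarray             # (N, 3)    root-centered body points from the scan

def fuse_frame(lidar_points: np.ndarray, imu_rotations: np.ndarray) -> FusedFrame:
    """Use the scan centroid as a crude global trajectory sample and attach
    the IMU orientations that constrain local poses."""
    root = lidar_points.mean(axis=0)
    return FusedFrame(root_translation=root,
                      imu_orientations=imu_rotations,
                      points=lidar_points - root)

# Usage with dummy data: 256 body points and 4 identity IMU rotations.
frame = fuse_frame(np.random.rand(256, 3), np.tile(np.eye(3), (4, 1, 1)))
print(frame.root_translation.shape, frame.imu_orientations.shape)
```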
