Search Results for author: Juze Zhang

Found 8 papers, 2 papers with code

BOTH2Hands: Inferring 3D Hands from Both Text Prompts and Body Dynamics

1 code implementation • 13 Dec 2023 • Wenqian Zhang, Molin Huang, Yuxuan Zhou, Juze Zhang, Jingyi Yu, Jingya Wang, Lan Xu

We further provide a strong baseline method, BOTH2Hands, for the novel task: generating vivid two-hand motions from both implicit body dynamics and explicit text prompts.

Motion Synthesis

I'M HOI: Inertia-aware Monocular Capture of 3D Human-Object Interactions

no code implementations • 10 Dec 2023 • Chengfeng Zhao, Juze Zhang, Jiashen Du, Ziwei Shan, Junye Wang, Jingyi Yu, Jingya Wang, Lan Xu

In this paper, we present I'm-HOI, a monocular scheme to faithfully capture the 3D motions of both the human and the object in a novel setting: a single RGB camera combined with an object-mounted Inertial Measurement Unit (IMU).

Human-Object Interaction Detection · Object · +1
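As a small, generic illustration of the capture setting described above (not I'm-HOI's actual pipeline), the sketch below composes an object-mounted IMU's world orientation with a fixed IMU-to-object mounting calibration to obtain the object's orientation. The function names, the z-axis rotations, and the calibration values are illustrative assumptions.

```python
import numpy as np

def rot_z(angle_rad):
    """Rotation about the z axis; used here only to fabricate sample orientations."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def object_orientation(R_world_imu, R_imu_object):
    """Compose the IMU's world-frame orientation with the fixed IMU-to-object
    mounting offset to get the object's world-frame orientation."""
    return R_world_imu @ R_imu_object

# Hypothetical calibration: the IMU is mounted rotated 90 degrees about z on the object.
R_imu_object = rot_z(np.pi / 2)

# A few simulated IMU orientation readings standing in for a real sensor stream.
for t, angle in enumerate(np.linspace(0.0, np.pi, 5)):
    R_world_object = object_orientation(rot_z(angle), R_imu_object)
    print(t, np.round(R_world_object, 3))
```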

IKOL: Inverse kinematics optimization layer for 3D human pose and shape estimation via Gauss-Newton differentiation

1 code implementation • 2 Feb 2023 • Juze Zhang, Ye Shi, Yuexin Ma, Lan Xu, Jingyi Yu, Jingya Wang

This paper presents an inverse kinematics optimization layer (IKOL) for 3D human pose and shape estimation that leverages the strengths of both optimization- and regression-based methods within an end-to-end framework.

3D human pose and shape estimation · regression
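To make the idea above concrete, here is a toy sketch of Gauss-Newton inverse kinematics under strong simplifying assumptions: a planar two-link arm with an analytic Jacobian stands in for the full body model, and the damping constant and helper names are invented for illustration. In an end-to-end layer, the converged solution could be differentiated with respect to its inputs via the implicit function theorem, reusing the same J^T J factor instead of unrolling the iterations; the paper's actual Gauss-Newton differentiation should be read from the source itself.

```python
import numpy as np

def fk(theta, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link arm: joint angles -> end-effector (x, y)."""
    t1, t2 = theta
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def jacobian(theta, l1=1.0, l2=1.0):
    """Analytic Jacobian of fk with respect to the joint angles."""
    t1, t2 = theta
    return np.array([[-l1 * np.sin(t1) - l2 * np.sin(t1 + t2), -l2 * np.sin(t1 + t2)],
                     [ l1 * np.cos(t1) + l2 * np.cos(t1 + t2),  l2 * np.cos(t1 + t2)]])

def gauss_newton_ik(target, theta0, iters=20, damping=1e-3):
    """Minimise ||fk(theta) - target||^2 with damped Gauss-Newton steps."""
    theta = theta0.copy()
    for _ in range(iters):
        r = fk(theta) - target                 # residual
        J = jacobian(theta)
        H = J.T @ J + damping * np.eye(2)      # Gauss-Newton approximation of the Hessian
        theta = theta - np.linalg.solve(H, J.T @ r)
    return theta

theta_star = gauss_newton_ik(np.array([1.2, 0.8]), np.array([0.3, 0.5]))
print("solved angles:", theta_star, "reached point:", fk(theta_star))
```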

NeuralDome: A Neural Modeling Pipeline on Multi-View Human-Object Interactions

no code implementations • CVPR 2023 • Juze Zhang, Haimin Luo, Hongdi Yang, Xinru Xu, Qianyang Wu, Ye Shi, Jingyi Yu, Lan Xu, Jingya Wang

We construct a dense multi-view dome to acquire a complex human-object interaction dataset, named HODome, that consists of ~75M frames of 10 subjects interacting with 23 objects.

Human-Object Interaction Detection

Weakly Supervised 3D Multi-person Pose Estimation for Large-scale Scenes based on Monocular Camera and Single LiDAR

no code implementations • 30 Nov 2022 • Peishan Cong, Yiteng Xu, Yiming Ren, Juze Zhang, Lan Xu, Jingya Wang, Jingyi Yu, Yuexin Ma

Motivated by this, we propose a method based on a monocular camera and a single LiDAR for 3D multi-person pose estimation in large-scale scenes, which is easy to deploy and insensitive to lighting conditions.

3D Multi-Person Pose Estimation · 3D Pose Estimation · +2
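As a generic illustration of one standard ingredient of camera-LiDAR fusion (not the paper's specific weakly supervised method), the sketch below projects LiDAR points into the image plane. The calibration matrices T_cam_lidar and K are made-up values, and the LiDAR frame is assumed to be already aligned with the camera's optical axes for simplicity.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project LiDAR points (N, 3) into pixel coordinates.

    T_cam_lidar: 4x4 rigid transform mapping LiDAR-frame points to the camera frame.
    K:           3x3 camera intrinsic matrix.
    Returns pixel coordinates and the corresponding camera-frame depths.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])  # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1            # drop points behind or too close to the camera
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]               # perspective divide
    return uv, pts_cam[:, 2]

# Hypothetical calibration: shared orientation, camera mounted 0.2 m above the LiDAR.
T_cam_lidar = np.eye(4)
T_cam_lidar[1, 3] = -0.2
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
points = np.array([[0.0, 0.0, 5.0], [1.0, -0.5, 8.0], [0.0, 0.0, -2.0]])  # last point is behind
uv, depth = project_lidar_to_image(points, T_cam_lidar, K)
print(uv, depth)
```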

Mutual Adaptive Reasoning for Monocular 3D Multi-Person Pose Estimation

no code implementations • 16 Jul 2022 • Juze Zhang, Jingya Wang, Ye Shi, Fei Gao, Lan Xu, Jingyi Yu

This method first uses 2.5D pose and geometry information to infer camera-centric root depths in a forward pass, and then exploits the root depths to further improve representation learning of 2.5D pose estimation in a backward pass.

3D Multi-Person Pose Estimation · Depth Estimation · +2
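The forward-pass geometry mentioned above can be illustrated with a generic pinhole-camera sketch (an assumption for exposition, not the paper's learned mutual adaptive reasoning): lift a 2.5D pose to camera-centric 3D for a candidate root depth, then pick the root depth whose reconstruction matches a known reference bone length. The joint indices, intrinsics, and grid search below are all hypothetical.

```python
import numpy as np

def lift_25d(uv, rel_z, root_z, K):
    """Back-project a 2.5D pose (pixel coords + root-relative depths) to camera-centric 3D."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    z = root_z + rel_z                        # absolute depth of every joint
    x = (uv[:, 0] - cx) / fx * z              # pinhole back-projection
    y = (uv[:, 1] - cy) / fy * z
    return np.stack([x, y, z], axis=1)

def estimate_root_depth(uv, rel_z, K, j_a, j_b, bone_len, candidates):
    """Pick the candidate root depth whose reconstruction best matches a known
    metric length between joints j_a and j_b (e.g. an average torso length)."""
    errors = []
    for root_z in candidates:
        p = lift_25d(uv, rel_z, root_z, K)
        errors.append(abs(np.linalg.norm(p[j_a] - p[j_b]) - bone_len))
    return candidates[int(np.argmin(errors))]

# Tiny synthetic example: two joints of a person roughly 3 m from the camera.
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
uv = np.array([[960.0, 400.0], [960.0, 600.0]])   # hypothetical neck and pelvis pixels
rel_z = np.array([0.0, 0.05])                      # root-relative depths in metres
root_z = estimate_root_depth(uv, rel_z, K, 0, 1, bone_len=0.6,
                             candidates=np.linspace(1.0, 6.0, 501))
print("estimated root depth:", root_z)
```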
