Search Results for author: Xiaochen Zhao

Found 11 papers, 3 papers with code

Stereo-Talker: Audio-driven 3D Human Synthesis with Prior-Guided Mixture-of-Experts

no code implementations • 31 Oct 2024 • Xiang Deng, Youxin Pang, Xiaochen Zhao, Chao Xu, Lizhen Wang, Hongjiang Xiao, Shi Yan, Hongwen Zhang, Yebin Liu

This paper introduces Stereo-Talker, a novel one-shot audio-driven human video synthesis system that generates 3D talking videos with precise lip synchronization, expressive body gestures, temporally consistent photo-realistic quality, and continuous viewpoint control.

Language Modelling • Large Language Model • +1

InvertAvatar: Incremental GAN Inversion for Generalized Head Avatars

1 code implementation • 3 Dec 2023 • Xiaochen Zhao, Jingxiang Sun, Lizhen Wang, Jinli Suo, Yebin Liu

While high fidelity and efficiency are central to the creation of digital head avatars, recent methods relying on 2D or 3D generative models often suffer from limitations such as shape distortion, expression inaccuracy, and identity flickering.

Image-to-Image Translation

HAvatar: High-fidelity Head Avatar via Facial Model Conditioned Neural Radiance Field

no code implementations • 29 Sep 2023 • Xiaochen Zhao, Lizhen Wang, Jingxiang Sun, Hongwen Zhang, Jinli Suo, Yebin Liu

Modeling an animatable 3D human head avatar under lightweight setups is of significant importance, but the problem has not been well solved.

Image-to-Image Translation

AvatarReX: Real-time Expressive Full-body Avatars

no code implementations • 8 May 2023 • Zerong Zheng, Xiaochen Zhao, Hongwen Zhang, Boning Liu, Yebin Liu

We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data.

Disentanglement

LatentAvatar: Learning Latent Expression Code for Expressive Neural Head Avatar

no code implementations • 2 May 2023 • Yuelang Xu, Hongwen Zhang, Lizhen Wang, Xiaochen Zhao, Han Huang, GuoJun Qi, Yebin Liu

Existing approaches to animatable NeRF-based head avatars are either built upon face templates or use the expression coefficients of templates as the driving signal.

StyleAvatar: Real-time Photo-realistic Portrait Avatar from a Single Video

1 code implementation • 1 May 2023 • Lizhen Wang, Xiaochen Zhao, Jingxiang Sun, Yuxiang Zhang, Hongwen Zhang, Tao Yu, Yebin Liu

Experimental results demonstrate the superiority of our method over existing facial reenactment methods in terms of image quality, full portrait video generation, and real-time re-animation.

Face Reenactment • Translation • +1

AvatarMAV: Fast 3D Head Avatar Reconstruction Using Motion-Aware Neural Voxels

no code implementations • 23 Nov 2022 • Yuelang Xu, Lizhen Wang, Xiaochen Zhao, Hongwen Zhang, Yebin Liu

AvatarMAV is the first method to model both the canonical appearance and the decoupled expression motion with neural voxels for head avatars.

Vehicle Reconstruction and Texture Estimation Using Deep Implicit Semantic Template Mapping

no code implementations • 30 Nov 2020 • Xiaochen Zhao, Zerong Zheng, Chaonan Ji, Zhenyi Liu, Siyou Lin, Tao Yu, Jinli Suo, Yebin Liu

We introduce VERTEX, an effective solution to recover 3D shape and intrinsic texture of vehicles from uncalibrated monocular input in real-world street environments.

NormalGAN: Learning Detailed 3D Human from a Single RGB-D Image

1 code implementation • ECCV 2020 • Lizhen Wang, Xiaochen Zhao, Tao Yu, Songtao Wang, Yebin Liu

We propose NormalGAN, a fast adversarial learning-based method to reconstruct the complete and detailed 3D human from a single RGB-D image.

3D Human Reconstruction • Denoising
