no code implementations • 31 Oct 2024 • Xiang Deng, Youxin Pang, Xiaochen Zhao, Chao Xu, Lizhen Wang, Hongjiang Xiao, Shi Yan, Hongwen Zhang, Yebin Liu
This paper introduces Stereo-Talker, a novel one-shot audio-driven human video synthesis system that generates 3D talking videos with precise lip synchronization, expressive body gestures, temporally consistent photo-realistic quality, and continuous viewpoint control.
no code implementations • 16 Oct 2024 • Jingxiang Sun, Cheng Peng, Ruizhi Shao, Yuan-Chen Guo, Xiaochen Zhao, Yangguang Li, YanPei Cao, Bo Zhang, Yebin Liu
We introduce DreamCraft3D++, an extension of DreamCraft3D that enables efficient high-quality generation of complex 3D assets.
1 code implementation • 3 Dec 2023 • Xiaochen Zhao, Jingxiang Sun, Lizhen Wang, Jinli Suo, Yebin Liu
While high fidelity and efficiency are central to the creation of digital head avatars, recent methods relying on 2D or 3D generative models often experience limitations such as shape distortion, expression inaccuracy, and identity flickering.
no code implementations • 10 Oct 2023 • Minghan Qin, Yifan Liu, Yuelang Xu, Xiaochen Zhao, Yebin Liu, Haoqian Wang
One crucial aspect of 3D head avatar reconstruction lies in the details of facial expressions.
no code implementations • 29 Sep 2023 • Xiaochen Zhao, Lizhen Wang, Jingxiang Sun, Hongwen Zhang, Jinli Suo, Yebin Liu
Modeling an animatable 3D human head avatar under lightweight setups is an important problem that has not been well solved.
no code implementations • 8 May 2023 • Zerong Zheng, Xiaochen Zhao, Hongwen Zhang, Boning Liu, Yebin Liu
We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data.
no code implementations • 2 May 2023 • Yuelang Xu, Hongwen Zhang, Lizhen Wang, Xiaochen Zhao, Han Huang, GuoJun Qi, Yebin Liu
Existing approaches to animatable NeRF-based head avatars are either built upon face templates or use the expression coefficients of templates as the driving signal.
1 code implementation • 1 May 2023 • Lizhen Wang, Xiaochen Zhao, Jingxiang Sun, Yuxiang Zhang, Hongwen Zhang, Tao Yu, Yebin Liu
Results and experiments demonstrate the superiority of our method in terms of image quality, full portrait video generation, and real-time re-animation compared to existing facial reenactment methods.
no code implementations • 23 Nov 2022 • Yuelang Xu, Lizhen Wang, Xiaochen Zhao, Hongwen Zhang, Yebin Liu
AvatarMAV is the first method to model both the canonical appearance and the decoupled expression motion of a head avatar with neural voxels.
no code implementations • 30 Nov 2020 • Xiaochen Zhao, Zerong Zheng, Chaonan Ji, Zhenyi Liu, Siyou Lin, Tao Yu, Jinli Suo, Yebin Liu
We introduce VERTEX, an effective solution to recover 3D shape and intrinsic texture of vehicles from uncalibrated monocular input in real-world street environments.
1 code implementation • ECCV 2020 • Lizhen Wang, Xiaochen Zhao, Tao Yu, Songtao Wang, Yebin Liu
We propose NormalGAN, a fast adversarial-learning-based method to reconstruct a complete and detailed 3D human from a single RGB-D image.