no code implementations • 15 Apr 2024 • Jenny Sheng, Matthieu Lin, Andrew Zhao, Kevin Pruvost, Yu-Hui Wen, Yangguang Li, Gao Huang, Yong-Jin Liu
This paper presents an exploration of preference learning in text-to-motion generation.
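The abstract only names preference learning, not a specific objective; a common formulation is a pairwise Bradley–Terry loss over a scalar reward model. The sketch below is illustrative and assumes that setup; the function name and scores are hypothetical, not from the paper.

```python
import numpy as np

def preference_loss(r_preferred, r_rejected):
    """Bradley-Terry negative log-likelihood for one preference pair.

    A reward model is trained so the preferred sample scores higher
    than the rejected one; the loss is -log(sigmoid(r_w - r_l)).
    """
    margin = r_preferred - r_rejected
    return -np.log(1.0 / (1.0 + np.exp(-margin)))

# Toy check: a correctly ordered pair yields a smaller loss.
loss_good = preference_loss(2.0, 0.5)  # preferred motion scores higher
loss_bad = preference_loss(0.5, 2.0)   # ordering violated
```

Minimizing this loss over many labeled pairs pushes the reward model to rank generated motions the way human annotators do.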
no code implementations • 19 Dec 2023 • Yuze He, Yushi Bai, Matthieu Lin, Jenny Sheng, Yubin Hu, Qi Wang, Yu-Hui Wen, Yong-Jin Liu
By lifting pre-trained 2D diffusion models into Neural Radiance Fields (NeRFs), text-to-3D generation methods have made great progress.
no code implementations • 30 Sep 2023 • Zhiyao Sun, Tian Lv, Sheng Ye, Matthieu Gaetan Lin, Jenny Sheng, Yu-Hui Wen, MinJing Yu, Yong-Jin Liu
The generation of stylistic 3D facial animations driven by speech poses a significant challenge as it requires learning a many-to-many mapping between speech, style, and the corresponding natural facial motion.
1 code implementation • 14 Sep 2023 • Sheng Ye, Yubin Hu, Matthieu Lin, Yu-Hui Wen, Wang Zhao, Yong-Jin Liu, Wenping Wang
To enhance the normal priors, we introduce a simple yet effective image sharpening and denoising technique, coupled with a network that estimates the pixel-wise uncertainty of the predicted surface normal vectors.
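The excerpt mentions an image sharpening step but not its exact form; one standard choice is unsharp masking, sketched below with a 3x3 box blur in plain NumPy. This is an assumption for illustration, and the network-based pixel-wise uncertainty estimation is not shown.

```python
import numpy as np

def sharpen(image, amount=1.0):
    """Unsharp masking: add back the difference between the image and
    a blurred copy, amplifying edges. One common sharpening scheme;
    the paper's exact technique may differ."""
    # 3x3 box blur via edge padding and neighbourhood averaging.
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    blurred = sum(
        padded[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0
    return image + amount * (image - blurred)
```

A flat image is unchanged (the blur equals the input), while intensity steps are exaggerated, which sharpens the cues a normal-prediction network relies on.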
1 code implementation • 18 Aug 2023 • Yubin Hu, Sheng Ye, Wang Zhao, Matthieu Lin, Yuze He, Yu-Hui Wen, Ying He, Yong-Jin Liu
In this paper, we propose a novel framework, empowered by a 2D diffusion-based inpainting model, to reconstruct complete surfaces for the hidden parts of objects.
no code implementations • 17 Sep 2022 • Zhiyao Sun, Yu-Hui Wen, Tian Lv, Yanan Sun, Ziyang Zhang, Yaoyuan Wang, Yong-Jin Liu
In this paper, we propose a high-quality facial expression editing method for talking face videos, allowing the user to continuously control the target emotion in the edited video.
no code implementations • 13 Apr 2022 • Zipeng Ye, Zhiyao Sun, Yu-Hui Wen, Yanan Sun, Tian Lv, Ran Yi, Yong-Jin Liu
In this paper, we propose a method to generate talking-face videos with continuously controllable expressions in real time.
1 code implementation • 11 Mar 2022 • Aihua Mao, Zihui Du, Yu-Hui Wen, Jun Xuan, Yong-Jin Liu
By considering noisy point clouds as a joint distribution of clean points and noise, the denoised result can be derived by disentangling the noise component from the latent point representation, where the mapping between the Euclidean and latent spaces is modeled by normalizing flows.
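The key property used above is that a normalizing flow is an exactly invertible map between Euclidean and latent spaces. A minimal sketch of one RealNVP-style affine coupling layer follows; the toy random "networks" and dimensions are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

class AffineCoupling:
    """One affine coupling layer: an invertible Euclidean-to-latent map.

    Half of the coordinates pass through unchanged and parameterize a
    scale/shift applied to the other half, so the inverse is exact.
    """

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        half = dim // 2
        # Toy stand-ins for learned networks predicting scale and shift.
        self.w_s = 0.1 * rng.standard_normal((half, dim - half))
        self.w_t = 0.1 * rng.standard_normal((half, dim - half))

    def forward(self, x):
        half = self.w_s.shape[0]
        x1, x2 = x[..., :half], x[..., half:]
        s, t = x1 @ self.w_s, x1 @ self.w_t
        return np.concatenate([x1, x2 * np.exp(s) + t], axis=-1)

    def inverse(self, z):
        half = self.w_s.shape[0]
        z1, z2 = z[..., :half], z[..., half:]
        s, t = z1 @ self.w_s, z1 @ self.w_t
        return np.concatenate([z1, (z2 - t) * np.exp(-s)], axis=-1)

flow = AffineCoupling(dim=6)
points = np.random.default_rng(1).standard_normal((5, 6))
latent = flow.forward(points)     # map points into latent space
recovered = flow.inverse(latent)  # exact reconstruction
```

In a denoising pipeline of this kind, the noise component would be edited out in latent space and the cleaned points recovered through the inverse map.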
no code implementations • CVPR 2021 • Yu-Hui Wen, Zhipeng Yang, Hongbo Fu, Lin Gao, Yanan Sun, Yong-Jin Liu
Motion style transfer is an important problem in many computer graphics and computer vision applications, including human animation, games, and robotics.