Search Results for author: Zhiyao Sun

Found 3 papers, 0 papers with code

DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models

no code implementations • 30 Sep 2023 • Zhiyao Sun, Tian Lv, Sheng Ye, Matthieu Gaetan Lin, Jenny Sheng, Yu-Hui Wen, MinJing Yu, Yong-Jin Liu

The generation of stylistic 3D facial animations driven by speech poses a significant challenge as it requires learning a many-to-many mapping between speech, style, and the corresponding natural facial motion.

Continuously Controllable Facial Expression Editing in Talking Face Videos

no code implementations • 17 Sep 2022 • Zhiyao Sun, Yu-Hui Wen, Tian Lv, Yanan Sun, Ziyang Zhang, Yaoyuan Wang, Yong-Jin Liu

In this paper, we propose a high-quality facial expression editing method for talking face videos, allowing the user to control the target emotion in the edited video continuously.

Image-to-Image Translation • Video Generation

Dynamic Neural Textures: Generating Talking-Face Videos with Continuously Controllable Expressions

no code implementations • 13 Apr 2022 • Zipeng Ye, Zhiyao Sun, Yu-Hui Wen, Yanan Sun, Tian Lv, Ran Yi, Yong-Jin Liu

In this paper, we propose a method to generate talking-face videos with continuously controllable expressions in real time.

Video Generation
