Search Results for author: Ziqiao Peng

Found 3 papers, 3 papers with code

SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis

1 code implementation • 29 Nov 2023 • Ziqiao Peng, Wentao Hu, Yue Shi, Xiangyu Zhu, Xiaomei Zhang, Hao Zhao, Jun He, Hongyan Liu, Zhaoxin Fan

A lifelike talking head requires synchronized coordination of subject identity, lip movements, facial expressions, and head poses.

Talking Face Generation • Talking Head Generation

SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces

1 code implementation • 19 Jun 2023 • Ziqiao Peng, Yihao Luo, Yue Shi, Hao Xu, Xiangyu Zhu, Jun He, Hongyan Liu, Zhaoxin Fan

To enhance the visual accuracy of generated lip movements while reducing the dependence on labeled data, we propose SelfTalk, a novel framework that introduces self-supervision into a cross-modal network system to learn 3D talking faces.
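The "commutative training diagram" in the title suggests a consistency constraint between two paths that should agree: reading text from generated lip motion versus recognizing it directly from audio. The sketch below is only an illustration of that idea with linear maps as hypothetical stand-ins for the networks (`A2L`, `L2T`, `A2T` are invented names, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear stand-ins for the modules in a commutative diagram:
#   A2L: audio features -> lip-motion features (facial animator)
#   L2T: lip-motion features -> text features  (lip reader)
#   A2T: audio features -> text features       (reference path)
A2L = rng.standard_normal((6, 5))
L2T = rng.standard_normal((5, 3))
A2T = A2L @ L2T  # chosen so the diagram commutes exactly in this toy setup

def commutativity_loss(audio):
    lips = audio @ A2L           # animate lips from audio
    text_via_lips = lips @ L2T   # read text back from the generated lips
    text_direct = audio @ A2T    # recognize text directly from audio
    # self-supervised consistency: both paths should yield the same text features
    return np.mean((text_via_lips - text_direct) ** 2)
```

In training, a loss of this shape would pull the animator toward lip motion that a lip reader can decode back to the original transcript, without needing extra labels.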

3D Face Animation • Lip Reading

EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation

2 code implementations • ICCV 2023 • Ziqiao Peng, HaoYu Wu, Zhenbo Song, Hao Xu, Xiangyu Zhu, Jun He, Hongyan Liu, Zhaoxin Fan

Specifically, we introduce the emotion disentangling encoder (EDE) to disentangle the emotion and content in speech by cross-reconstructing speech signals with different emotion labels.
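Cross-reconstruction of this kind typically means encoding two utterances, swapping their emotion codes, and requiring faithful reconstruction from each utterance's content plus the other's emotion. A minimal sketch of that training signal, with random linear maps as invented stand-ins for the EDE and decoder (none of these names or shapes come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "encoders" and "decoder" standing in for the
# content branch, emotion branch, and reconstruction network.
W_content = rng.standard_normal((8, 4))
W_emotion = rng.standard_normal((8, 4))
W_decode = rng.standard_normal((8, 8))  # [content; emotion] -> speech features

def encode(x):
    """Split a speech-feature vector into content and emotion codes."""
    return x @ W_content, x @ W_emotion

def decode(content, emotion):
    """Reconstruct speech features from a (content, emotion) pair."""
    return np.concatenate([content, emotion]) @ W_decode

def cross_reconstruction_loss(x_a, x_b):
    """Swap emotion codes between two utterances: each input must be
    rebuilt from its own content and the *other* utterance's emotion,
    which pressures the two codes to stay disentangled."""
    c_a, e_a = encode(x_a)
    c_b, e_b = encode(x_b)
    recon_a = decode(c_a, e_b)  # content of A, emotion of B
    recon_b = decode(c_b, e_a)  # content of B, emotion of A
    return np.mean((recon_a - x_a) ** 2) + np.mean((recon_b - x_b) ** 2)
```

With untrained random weights this loss is just a number; the point is only the structure of the objective, where minimizing it pushes emotion information out of the content code.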

3D Face Animation • Disentanglement
