Search Results for author: Yifeng Ma

Found 3 papers, 1 paper with code

DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models

no code implementations · 15 Dec 2023 · Yifeng Ma, Shiwei Zhang, Jiayu Wang, Xiang Wang, Yingya Zhang, Zhidong Deng

In this work, we propose the DreamTalk framework to fill this gap; it employs a meticulous design to unlock the potential of diffusion models for generating expressive talking heads. (A rough conditional-diffusion code sketch follows this entry.)

Denoising · Talking Head Generation
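The listing gives no implementation details, but the description points at diffusion-based conditional generation. Below is a minimal sketch of that general idea, assuming facial motion is represented as a coefficient vector conditioned on an audio embedding; the module name AudioConditionedDenoiser, all dimensions, and the plain DDPM sampler are hypothetical and are not taken from the DreamTalk paper.

```python
# Minimal sketch (not the DreamTalk implementation): a DDPM-style denoiser that
# predicts facial-motion coefficients conditioned on an audio embedding.
# All module names and dimensions below are illustrative assumptions.
import torch
import torch.nn as nn


class AudioConditionedDenoiser(nn.Module):
    def __init__(self, motion_dim=64, audio_dim=128, hidden=256):
        super().__init__()
        self.time_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        self.net = nn.Sequential(
            nn.Linear(motion_dim + audio_dim + hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, motion_dim),  # predicts the noise added to the motion coefficients
        )

    def forward(self, noisy_motion, audio_feat, t):
        t_emb = self.time_embed(t.float().unsqueeze(-1))
        return self.net(torch.cat([noisy_motion, audio_feat, t_emb], dim=-1))


@torch.no_grad()
def sample(denoiser, audio_feat, steps=50, motion_dim=64):
    """Plain DDPM ancestral sampling over facial-motion coefficients."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(audio_feat.shape[0], motion_dim)
    for t in reversed(range(steps)):
        eps = denoiser(x, audio_feat, torch.full((x.shape[0],), t))
        x = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x


motion = sample(AudioConditionedDenoiser(), torch.randn(1, 128))  # untrained demo run
print(motion.shape)  # torch.Size([1, 64])
```

In practice a trained denoiser and real audio features would replace the random tensors; the sketch only shows where the conditioning and the iterative denoising enter.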

TalkCLIP: Talking Head Generation with Text-Guided Expressive Speaking Styles

no code implementations · 1 Apr 2023 · Yifeng Ma, Suzhen Wang, Yu Ding, Bowen Ma, Tangjie Lv, Changjie Fan, Zhipeng Hu, Zhidong Deng, Xin Yu

In this work, we propose an expression-controllable one-shot talking head method, dubbed TalkCLIP, in which the expression of the generated talking head is specified by natural language. (A rough text-to-style-code sketch follows this entry.)

2D Semantic Segmentation task 3 (25 classes) · Talking Head Generation
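The abstract says the speaking expression is specified by natural language. One plausible way to realize text-guided style, sketched below under assumptions, is to encode the description with a frozen pretrained CLIP text encoder and project it to a style code; the projection head, the 128-dimensional style code, and the chosen checkpoint are illustrative and are not details taken from the TalkCLIP paper.

```python
# Minimal sketch (not the TalkCLIP implementation): map a natural-language style
# description to a style code via a frozen CLIP text encoder plus a small MLP.
# The MLP head and the 128-dim style code are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32").eval()

style_head = nn.Sequential(  # hypothetical projection from CLIP space to a style code
    nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128)
)


@torch.no_grad()
def text_to_style_code(description: str) -> torch.Tensor:
    tokens = tokenizer(description, return_tensors="pt", padding=True, truncation=True)
    clip_feat = text_encoder(**tokens).pooler_output  # (1, 512) sentence-level embedding
    return style_head(clip_feat)                      # (1, 128) style code for a generator


style = text_to_style_code("speaking cheerfully with wide-open eyes and a big smile")
print(style.shape)  # torch.Size([1, 128])
```

Downstream, a talking-head generator conditioned on this code would play the role that a reference-video style would otherwise play.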

StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles

1 code implementation · 3 Jan 2023 · Yifeng Ma, Suzhen Wang, Zhipeng Hu, Changjie Fan, Tangjie Lv, Yu Ding, Zhidong Deng, Xin Yu

In a nutshell, we aim to obtain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with that reference speaking style and another piece of audio. (A rough style-extraction sketch follows this entry.)

Talking Face Generation · Talking Head Generation
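StyleTalk's stated goal is to extract a speaking style from a reference video and reuse it when animating a one-shot portrait with new audio. The sketch below shows one generic way such a pipeline could be wired, assuming per-frame expression parameters are available for the reference clip; the GRU encoders, dimensions, and module names are assumptions rather than the paper's architecture.

```python
# Minimal sketch (not the StyleTalk implementation): pool a speaking-style code
# from a reference clip's per-frame expression parameters, then fuse it with
# audio features to predict expressions for every frame of the new audio.
# Dimensions and module names are illustrative assumptions.
import torch
import torch.nn as nn


class StyleEncoder(nn.Module):
    """Encodes a (B, T_ref, expr_dim) sequence of reference expressions into one style vector."""
    def __init__(self, expr_dim=64, style_dim=128):
        super().__init__()
        self.gru = nn.GRU(expr_dim, style_dim, batch_first=True)

    def forward(self, ref_expr):                  # (B, T_ref, expr_dim)
        _, h = self.gru(ref_expr)
        return h[-1]                              # (B, style_dim)


class AudioToExpression(nn.Module):
    """Predicts per-frame expressions for new audio, modulated by the style vector."""
    def __init__(self, audio_dim=80, style_dim=128, expr_dim=64, hidden=256):
        super().__init__()
        self.gru = nn.GRU(audio_dim + style_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, expr_dim)

    def forward(self, audio_feat, style):         # (B, T, audio_dim), (B, style_dim)
        style_seq = style.unsqueeze(1).expand(-1, audio_feat.shape[1], -1)
        h, _ = self.gru(torch.cat([audio_feat, style_seq], dim=-1))
        return self.out(h)                        # (B, T, expr_dim)


style = StyleEncoder()(torch.randn(1, 90, 64))               # style from a 90-frame reference clip
expr = AudioToExpression()(torch.randn(1, 200, 80), style)   # expressions for 200 audio frames
print(expr.shape)  # torch.Size([1, 200, 64])
```

A rendering network that maps the predicted expressions plus the one-shot portrait image to video frames would sit downstream of this step.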
