Search Results for author: Yesheng Chai

Found 4 papers, 0 papers with code

OSM-Net: One-to-Many One-shot Talking Head Generation with Spontaneous Head Motions

no code implementations • 28 Sep 2023 • Jin Liu, Xi Wang, Xiaomeng Fu, Yesheng Chai, Cai Yu, Jiao Dai, Jizhong Han

Other works construct a one-to-one mapping between audio signals and head motion sequences, which introduces ambiguous correspondences into the mapping, since people can move their heads differently when speaking the same content.

Talking Head Generation, Video Generation

MFR-Net: Multi-faceted Responsive Listening Head Generation via Denoising Diffusion Model

no code implementations • 31 Aug 2023 • Jin Liu, Xi Wang, Xiaomeng Fu, Yesheng Chai, Cai Yu, Jiao Dai, Jizhong Han

Responsive listening head generation is an important task that aims to model face-to-face communication scenarios by generating a listener head video given a speaker video and a listener head image.

Denoising

FONT: Flow-guided One-shot Talking Head Generation with Natural Head Motions

no code implementations • 31 Mar 2023 • Jin Liu, Xi Wang, Xiaomeng Fu, Yesheng Chai, Cai Yu, Jiao Dai, Jizhong Han

Specifically, the head pose prediction module is designed to generate head pose sequences from the source face and driving audio.

Pose Prediction, Talking Head Generation, +1

OPT: One-shot Pose-Controllable Talking Head Generation

no code implementations • 16 Feb 2023 • Jin Liu, Xi Wang, Xiaomeng Fu, Yesheng Chai, Cai Yu, Jiao Dai, Jizhong Han

To solve the identity mismatch problem and achieve high-quality free pose control, we present One-shot Pose-controllable Talking head generation network (OPT).

Disentanglement, Talking Head Generation
