Search Results for author: Shaoting Zhu

Found 1 paper, 0 papers with code

Multimodal-driven Talking Face Generation via a Unified Diffusion-based Generator

no code implementations • 4 May 2023 • Chao Xu, Shaoting Zhu, Junwei Zhu, Tianxin Huang, Jiangning Zhang, Ying Tai, Yong Liu

More specifically, given a textured face as the source and the rendered face projected from the desired 3DMM coefficients as the target, our proposed Texture-Geometry-aware Diffusion Model decomposes the complex transfer problem into a multi-conditional denoising process, where a Texture Attention-based module accurately models the correspondences between the appearance and geometry cues contained in the source and target conditions, and incorporates extra implicit information for high-fidelity talking face generation.
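As a rough illustration of such a multi-conditional denoising setup (a minimal sketch, not the authors' implementation), the code below shows a hypothetical TextureAttention block in which geometry tokens from the rendered target face query appearance tokens from the textured source face, and a toy denoiser consumes the fused condition alongside the noisy latent. All module names, tensor shapes, and the timestep embedding are assumptions for illustration only.

```python
# Sketch of a multi-conditional denoising step with texture cross-attention.
# Hypothetical modules and shapes; not the paper's actual architecture.
import torch
import torch.nn as nn

class TextureAttention(nn.Module):
    """Cross-attention where geometry tokens query appearance tokens."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, geom_tokens, tex_tokens):
        # geom_tokens: (B, N, D) from the rendered target face (geometry cue)
        # tex_tokens:  (B, M, D) from the textured source face (appearance cue)
        fused, _ = self.attn(query=geom_tokens, key=tex_tokens, value=tex_tokens)
        return self.norm(geom_tokens + fused)  # residual fusion of the two cues

class MultiConditionalDenoiser(nn.Module):
    """Toy noise predictor conditioned on the fused texture/geometry tokens."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.texture_attn = TextureAttention(dim)
        self.eps_head = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, noisy_latent, geom_tokens, tex_tokens, t):
        # noisy_latent: (B, N, D) noisy face latent at diffusion step t
        cond = self.texture_attn(geom_tokens, tex_tokens)   # (B, N, D)
        t_emb = torch.sin(t.float())[:, None, None]          # crude timestep embedding
        return self.eps_head(torch.cat([noisy_latent + t_emb, cond], dim=-1))

if __name__ == "__main__":
    B, N, D = 2, 16, 64
    model = MultiConditionalDenoiser(D)
    eps = model(
        torch.randn(B, N, D),             # noisy latent
        torch.randn(B, N, D),             # geometry tokens (target condition)
        torch.randn(B, N, D),             # texture tokens (source condition)
        torch.randint(0, 1000, (B,)),     # diffusion timesteps
    )
    print(eps.shape)  # torch.Size([2, 16, 64])
```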

Denoising, Face Swapping, +1
