Search Results for author: Sicheng Xu

Found 5 papers, 2 papers with code

VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time

no code implementations • 16 Apr 2024 • Sicheng Xu, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zang, Yizhong Zhang, Xin Tong, Baining Guo

We introduce VASA, a framework for generating lifelike talking faces with appealing visual affective skills (VAS) given a single static image and a speech audio clip.

AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections

no code implementations • 5 Sep 2023 • Yue Wu, Sicheng Xu, Jianfeng Xiang, Fangyun Wei, Qifeng Chen, Jiaolong Yang, Xin Tong

For the new task, we base our method on the generative radiance manifold representation and equip it with learnable facial and head-shoulder deformations.

RemoteTouch: Enhancing Immersive 3D Video Communication with Hand Touch

no code implementations • 28 Feb 2023 • Yizhong Zhang, Zhiqi Li, Sicheng Xu, Chong Li, Jiaolong Yang, Xin Tong, Baining Guo

A key challenge in emulating the remote hand touch is the realistic rendering of the participant's hand and arm as the hand touches the screen.

Deep 3D Portrait from a Single Image

1 code implementation • CVPR 2020 • Sicheng Xu, Jiaolong Yang, Dong Chen, Fang Wen, Yu Deng, Yunde Jia, Xin Tong

We evaluate the accuracy of our method both in 3D and with pose manipulation tasks on 2D images.

Face Model • Stereo Matching

Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set

4 code implementations • 20 Mar 2019 • Yu Deng, Jiaolong Yang, Sicheng Xu, Dong Chen, Yunde Jia, Xin Tong

Recently, deep learning-based 3D face reconstruction methods have shown promising results in both quality and efficiency. However, training deep neural networks typically requires a large volume of data, whereas face images with ground-truth 3D face shapes are scarce.

Ranked #3 on 3D Face Reconstruction on Florence (RMSE Cooperative metric)

3D Face Reconstruction • Weakly-supervised Learning
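This entry's abstract describes training 3D face reconstruction without ground-truth 3D scans. Below is a minimal, hypothetical sketch (not the released code) of how such weak supervision can be assembled from image-level signals only: a photometric term on the rendered face region, a 2D landmark reprojection term, and a prior on the 3DMM coefficients. The function name, tensor shapes, loss weights, and the skin-mask input are illustrative assumptions, and further terms used in the paper (such as a perception-level feature loss) are omitted for brevity.

```python
# Hypothetical sketch of weakly-supervised 3DMM fitting losses (illustrative only).
import torch
import torch.nn.functional as F

def weakly_supervised_loss(rendered_img, target_img, skin_mask,
                           pred_landmarks, gt_landmarks, coeffs,
                           w_photo=1.0, w_lmk=1e-3, w_reg=3e-4):
    """Combine image-level losses in place of 3D supervision.

    rendered_img, target_img:     (B, 3, H, W) images in [0, 1]
    skin_mask:                    (B, 1, H, W) soft mask of reliable face pixels
    pred_landmarks, gt_landmarks: (B, 68, 2) 2D landmark positions in pixels
    coeffs:                       (B, D) predicted 3DMM coefficients
    """
    # Photometric loss: per-pixel color error, restricted to the masked face region.
    photo = (skin_mask * (rendered_img - target_img).norm(dim=1, keepdim=True)).sum()
    photo = photo / skin_mask.sum().clamp(min=1.0)

    # Landmark loss: 2D reprojection error against detected landmarks.
    lmk = F.mse_loss(pred_landmarks, gt_landmarks)

    # Regularization: keep 3DMM coefficients close to the statistical prior (zero mean).
    reg = coeffs.pow(2).mean()

    return w_photo * photo + w_lmk * lmk + w_reg * reg

# Smoke test with dummy tensors (shapes are placeholders, not the paper's settings).
if __name__ == "__main__":
    B, H, W = 2, 224, 224
    loss = weakly_supervised_loss(
        rendered_img=torch.rand(B, 3, H, W),
        target_img=torch.rand(B, 3, H, W),
        skin_mask=torch.rand(B, 1, H, W),
        pred_landmarks=torch.rand(B, 68, 2) * W,
        gt_landmarks=torch.rand(B, 68, 2) * W,
        coeffs=torch.randn(B, 257),
    )
    print(loss.item())
```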
