Search Results for author: Haotian Yang

Found 7 papers, 3 papers with code

FaceScape: a Large-scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction

1 code implementation CVPR 2020 Haotian Yang, Hao Zhu, Yanru Wang, Mingkai Huang, Qiu Shen, Ruigang Yang, Xun Cao

In this paper, we present a large-scale detailed 3D face dataset, FaceScape, and propose a novel algorithm that is able to predict elaborate riggable 3D face models from a single image input.

FaceScape: 3D Facial Dataset and Benchmark for Single-View 3D Face Reconstruction

1 code implementation • 1 Nov 2021 • Hao Zhu, Haotian Yang, Longwei Guo, Yidi Zhang, Yanru Wang, Mingkai Huang, Menghua Wu, Qiu Shen, Ruigang Yang, Xun Cao

By training on FaceScape data, a novel algorithm is proposed to predict elaborate riggable 3D face models from a single image input.

3D Face Reconstruction • 3D Reconstruction

Detailed Facial Geometry Recovery from Multi-View Images by Learning an Implicit Function

1 code implementation • 4 Jan 2022 • Yunze Xiao, Hao Zhu, Haotian Yang, Zhengyu Diao, Xiangju Lu, Xun Cao

By fitting a 3D morphable model from multi-view images, the features of multiple images are extracted and aggregated in the mesh-attached UV space, which makes the implicit function more effective in recovering detailed facial shape.
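The core idea of aggregating multi-view image features in the mesh-attached UV space can be illustrated with a minimal sketch. This is not the authors' implementation; the array shapes, the `aggregate_uv_features` helper, and the nearest-neighbour sampling are all simplifying assumptions made here for illustration:

```python
import numpy as np

def aggregate_uv_features(feat_maps, proj_xy, visible):
    """Sample each view's 2D feature map at the projected mesh-vertex
    locations and average the visible samples, yielding one feature per
    vertex that lives in the mesh-attached UV space.

    Assumed (hypothetical) shapes:
      feat_maps: (K, H, W, C) per-view feature maps from a 2D encoder
      proj_xy:   (K, V, 2)   vertex positions projected into each view (pixels)
      visible:   (K, V)      per-view vertex visibility mask (e.g. z-buffered)
    """
    K, H, W, C = feat_maps.shape
    V = proj_xy.shape[1]
    acc = np.zeros((V, C))
    weight = np.zeros((V, 1))
    for k in range(K):
        # Nearest-neighbour sampling for brevity; bilinear in practice.
        x = np.clip(np.round(proj_xy[k, :, 0]).astype(int), 0, W - 1)
        y = np.clip(np.round(proj_xy[k, :, 1]).astype(int), 0, H - 1)
        sample = feat_maps[k, y, x]        # (V, C) fancy-indexed per vertex
        mask = visible[k][:, None]         # (V, 1)
        acc += sample * mask
        weight += mask
    # Vertices invisible in every view keep a zero feature.
    return acc / np.maximum(weight, 1)     # (V, C)
```

An implicit function can then be conditioned on these per-vertex UV features to recover detail beyond what the fitted morphable model expresses.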

Detailed Avatar Recovery from Single Image

no code implementations • 6 Aug 2021 • Hao Zhu, Xinxin Zuo, Haotian Yang, Sen Wang, Xun Cao, Ruigang Yang

In this paper, we propose a novel learning-based framework that combines the robustness of the parametric model with the flexibility of free-form 3D deformation.

Towards Practical Capture of High-Fidelity Relightable Avatars

no code implementations • 8 Sep 2023 • Haotian Yang, Mingwu Zheng, Wanquan Feng, Haibin Huang, Yu-Kun Lai, Pengfei Wan, Zhongyuan Wang, Chongyang Ma

Specifically, the proposed method, TRAvatar, is trained with dynamic image sequences captured in a Light Stage under varying lighting conditions, enabling realistic relighting and real-time animation of avatars in diverse scenes.

VRMM: A Volumetric Relightable Morphable Head Model

no code implementations • 6 Feb 2024 • Haotian Yang, Mingwu Zheng, Chongyang Ma, Yu-Kun Lai, Pengfei Wan, Haibin Huang

In this paper, we introduce the Volumetric Relightable Morphable Model (VRMM), a novel volumetric and parametric facial prior for 3D face modeling.

3D Face Reconstruction • Self-Supervised Learning
