Search Results for author: Lingyu Wei

Found 5 papers, 1 paper with code

Normalized Avatar Synthesis Using StyleGAN and Perceptual Refinement

no code implementations CVPR 2021 Huiwen Luo, Koki Nagano, Han-Wei Kung, Mclean Goldwhite, Qingguo Xu, Zejian Wang, Lingyu Wei, Liwen Hu, Hao Li

Cutting-edge 3D face reconstruction methods use non-linear morphable face models combined with GAN-based decoders to capture a person's likeness and details, but they fail to produce neutral head models with unshaded albedo textures, which are critical for creating relightable, animation-friendly avatars for integration into virtual environments.
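As background for the morphable-model terminology above, a minimal sketch of the classical *linear* morphable face model (the baseline that non-linear, GAN-decoded variants extend). The shapes and variable names here are illustrative, not taken from the paper:

```python
import numpy as np

def morphable_shape(mean, basis, coeffs):
    """Classical linear morphable model: a face shape is the mean shape
    plus a linear combination of learned basis deformations.

    mean:   (3N,)  stacked xyz coordinates of the mean face
    basis:  (3N, K) principal deformation directions
    coeffs: (K,)   per-identity coefficients
    """
    return mean + basis @ coeffs

# Toy example with 2 vertices (6 coordinates) and 2 basis vectors.
mean = np.zeros(6)
basis = np.eye(6)[:, :2]
shape = morphable_shape(mean, basis, np.array([1.0, 2.0]))
```

Non-linear models replace the `basis @ coeffs` term with a learned decoder network, trading the closed form for higher-fidelity detail.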

3D Face Reconstruction · Face Model

Real-Time Hair Rendering using Sequential Adversarial Networks

no code implementations ECCV 2018 Lingyu Wei, Liwen Hu, Vladimir Kim, Ersin Yumer, Hao Li

To handle the diversity of hairstyles and their appearance complexity, we disentangle hair structure, color, and illumination properties using a sequential GAN architecture and a semi-supervised training approach.
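The sequential, factor-by-factor design can be sketched as a pipeline of stages, each conditioning on the previous stage's output. The stage functions below are hypothetical stand-ins (simple NumPy transforms, not the paper's trained generators); only the staged composition mirrors the described architecture:

```python
import numpy as np

def structure_stage(strand_features):
    # Stage 1 (stand-in): map strand features to a bounded structure map.
    return np.tanh(strand_features)

def color_stage(structure_map, color_code):
    # Stage 2 (stand-in): condition the structure map on a target hair color.
    return structure_map * color_code.reshape(1, 1, -1)

def illumination_stage(colored_map, light_dir):
    # Stage 3 (stand-in): apply a simple Lambertian-style shading factor.
    shading = np.clip(light_dir @ np.array([0.0, 0.0, 1.0]), 0.0, 1.0)
    return colored_map * shading

strands = np.random.rand(64, 64, 3)   # toy strand feature map
rgb = np.array([0.6, 0.4, 0.2])       # target hair color
light = np.array([0.0, 0.0, 1.0])     # frontal light direction

# Each factor is handled by its own stage, so any one can be swapped
# (e.g. recolor the same structure) without retraining the others.
out = illumination_stage(color_stage(structure_stage(strands), rgb), light)
```

Disentangling the factors into separate stages is what enables independent editing of structure, color, and lighting at render time.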

Photorealistic Facial Texture Inference Using Deep Neural Networks

1 code implementation CVPR 2017 Shunsuke Saito, Lingyu Wei, Liwen Hu, Koki Nagano, Hao Li

We present a data-driven inference method that can synthesize a photorealistic texture map of a complete 3D face model given a partial 2D view of a person in the wild.
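A toy illustration of the completion problem: pixels hidden in the partial view must be filled from elsewhere. The symmetry-plus-mean fallback below is a hand-written stand-in for the paper's learned, data-driven synthesis, shown only to make the input/output contract concrete:

```python
import numpy as np

def complete_texture(partial, mask):
    """Fill missing texture pixels (mask == 0) of a grayscale map.

    Stand-in heuristic: borrow from the face's left/right mirror image
    where that side was visible, otherwise fall back to the mean
    visible intensity. A learned method replaces both rules.
    """
    mirrored = partial[:, ::-1]          # horizontally flipped texture
    mirror_mask = mask[:, ::-1]
    out = partial.copy()
    fill = (mask == 0) & (mirror_mask == 1)   # hidden here, visible mirrored
    out[fill] = mirrored[fill]
    still_missing = (mask == 0) & (mirror_mask == 0)
    out[still_missing] = partial[mask == 1].mean()
    return out

# Example: left half of a 4x4 texture is visible, right half is not.
tex = np.arange(16, dtype=float).reshape(4, 4)
vis = np.zeros((4, 4), dtype=int)
vis[:, :2] = 1
completed = complete_texture(tex, vis)
```

The learned inference goes far beyond symmetry, hallucinating plausible skin detail, but the interface is the same: partial observation in, complete texture map out.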

Face Model

Capturing Dynamic Textured Surfaces of Moving Targets

no code implementations 11 Apr 2016 Ruizhe Wang, Lingyu Wei, Etienne Vouga, Qi-Xing Huang, Duygu Ceylan, Gerard Medioni, Hao Li

We present an end-to-end system for reconstructing complete watertight and textured models of moving subjects such as clothed humans and animals, using only three or four handheld sensors.

Dense Human Body Correspondences Using Convolutional Networks

no code implementations CVPR 2016 Lingyu Wei, Qi-Xing Huang, Duygu Ceylan, Etienne Vouga, Hao Li

We propose a deep learning approach for finding dense correspondences between 3D scans of people.
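Once a network embeds each vertex into a descriptor space, dense correspondence reduces to nearest-neighbor matching between the two scans' descriptors. The sketch below uses random vectors as stand-ins for learned per-vertex embeddings; the matching step itself is generic:

```python
import numpy as np

def dense_correspondence(desc_a, desc_b):
    """For each vertex descriptor in scan A, return the index of the
    closest descriptor in scan B (squared Euclidean distance).

    desc_a: (Na, D) array, desc_b: (Nb, D) array.
    """
    # Pairwise squared distances via broadcasting: (Na, Nb)
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# Sanity check: matching a scan against a permuted copy of itself
# should recover the permutation exactly.
rng = np.random.default_rng(0)
desc = rng.standard_normal((10, 8))   # stand-in for learned embeddings
perm = rng.permutation(10)
matches = dense_correspondence(desc, desc[perm])
```

The quality of the correspondences therefore rests entirely on the learned descriptors; the brute-force `(Na, Nb)` distance matrix here would be replaced by an approximate nearest-neighbor index for full-resolution scans.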
