Search Results for author: Zeng Huang

Found 15 papers, 8 papers with code

LightAvatar: Efficient Head Avatar as Dynamic Neural Light Field

1 code implementation • 26 Sep 2024 • Huan Wang, Feitong Tan, Ziqian Bai, Yinda Zhang, Shichen Liu, Qiangeng Xu, Menglei Chai, Anish Prabhu, Rohit Pandey, Sean Fanello, Zeng Huang, Yun Fu

Recent works have shown that neural radiance fields (NeRFs) on top of parametric models reach SOTA quality in building photorealistic head avatars from a monocular video.

Gaussian3Diff: 3D Gaussian Diffusion for 3D Full Head Synthesis and Editing

no code implementations • 5 Dec 2023 • Yushi Lan, Feitong Tan, Di Qiu, Qiangeng Xu, Kyle Genova, Zeng Huang, Sean Fanello, Rohit Pandey, Thomas Funkhouser, Chen Change Loy, Yinda Zhang

We present a novel framework for generating photorealistic 3D human heads and subsequently manipulating and reposing them with remarkable flexibility.

Face Model

Cross-Modal 3D Shape Generation and Manipulation

no code implementations • 24 Jul 2022 • Zezhou Cheng, Menglei Chai, Jian Ren, Hsin-Ying Lee, Kyle Olszewski, Zeng Huang, Subhransu Maji, Sergey Tulyakov

In this paper, we propose a generic multi-modal generative model that couples the 2D modalities and implicit 3D representations through shared latent spaces.

3D Generation • 3D Shape Generation

R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis

1 code implementation • 31 Mar 2022 • Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, Yun Fu, Sergey Tulyakov

On the other hand, a Neural Light Field (NeLF) presents a more straightforward representation than NeRF for novel view synthesis: rendering a pixel amounts to a single forward pass without ray-marching.

Novel View Synthesis
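The contrast the abstract draws can be sketched numerically. Below is a toy illustration (not the paper's R2L architecture): a random linear map stands in for the MLP, the sample count and encodings are placeholders, and the point is only that a NeRF-style renderer queries the network once per sample along the ray while a NeLF-style renderer queries it once per ray.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "networks": fixed random linear maps standing in for trained MLPs.
W_nerf = rng.standard_normal((4, 6))   # (r, g, b, sigma) from a 6-D point+direction encoding
W_nelf = rng.standard_normal((3, 6))   # (r, g, b) directly from a 6-D ray encoding

def render_pixel_nerf(ray_o, ray_d, n_samples=64):
    """NeRF-style: query the network at many points along the ray,
    then alpha-composite front to back (ray-marching)."""
    ts = np.linspace(0.1, 4.0, n_samples)
    dt = ts[1] - ts[0]
    color, transmittance = np.zeros(3), 1.0
    for t in ts:
        x = np.concatenate([ray_o + t * ray_d, ray_d])     # crude encoding
        out = W_nerf @ x
        rgb = 1 / (1 + np.exp(-out[:3]))                   # sigmoid -> [0, 1]
        sigma = np.log1p(np.exp(out[3]))                   # softplus density
        alpha = 1.0 - np.exp(-sigma * dt)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

def render_pixel_nelf(ray_o, ray_d):
    """NeLF-style: one forward pass maps the whole ray directly to a color."""
    x = np.concatenate([ray_o, ray_d])
    return 1 / (1 + np.exp(-(W_nelf @ x)))

o, d = np.zeros(3), np.array([0.0, 0.0, 1.0])
print(render_pixel_nerf(o, d))  # 64 network queries for one pixel
print(render_pixel_nelf(o, d))  # 1 network query for one pixel
```

The per-pixel query count is what R2L's distillation exploits: once a NeLF student matches the NeRF teacher, rendering cost drops from `n_samples` evaluations to one.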

NeROIC: Neural Rendering of Objects from Online Image Collections

1 code implementation • 7 Jan 2022 • Zhengfei Kuang, Kyle Olszewski, Menglei Chai, Zeng Huang, Panos Achlioptas, Sergey Tulyakov

We present a novel method to acquire object representations from online image collections, capturing high-quality geometry and material properties of arbitrary objects from photographs with varying cameras, illumination, and backgrounds.

Neural Rendering • Novel View Synthesis • +1

S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling

no code implementations • CVPR 2021 • Ze Yang, Shenlong Wang, Sivabalan Manivasagam, Zeng Huang, Wei-Chiu Ma, Xinchen Yan, Ersin Yumer, Raquel Urtasun

Constructing and animating humans is an important component for building virtual worlds in a wide variety of applications such as virtual reality or robotics testing in simulation.

Monocular Real-Time Volumetric Performance Capture

1 code implementation • ECCV 2020 • Ruilong Li, Yuliang Xiu, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li

We present the first approach to volumetric performance capture and novel-view rendering at real-time speed from monocular video, eliminating the need for expensive multi-view systems or cumbersome pre-acquisition of a personalized template model.

3D Human Shape Estimation

ARCH: Animatable Reconstruction of Clothed Humans

1 code implementation • CVPR 2020 • Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, Tony Tung

In this paper, we propose ARCH (Animatable Reconstruction of Clothed Humans), a novel end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image.

3D Object Reconstruction From A Single Image • 3D Reconstruction

Learning Perspective Undistortion of Portraits

no code implementations • ICCV 2019 • Yajie Zhao, Zeng Huang, Tianye Li, Weikai Chen, Chloe LeGendre, Xinglei Ren, Jun Xing, Ari Shapiro, Hao Li

In contrast to the previous state-of-the-art approach, our method handles even portraits with extreme perspective distortion, as we avoid the inaccurate and error-prone step of first fitting a 3D face model.

3D Reconstruction • Camera Calibration • +2

PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization

1 code implementation • ICCV 2019 • Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, Hao Li

We introduce Pixel-aligned Implicit Function (PIFu), a highly effective implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object.

3D Human Pose Estimation • 3D Human Reconstruction • +4
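The core idea of a pixel-aligned implicit function can be made concrete with a minimal sketch: for each 3D query point, sample the image feature at the point's 2D projection and feed it, together with the point's depth, through an MLP that outputs an inside/outside probability. Everything here is a toy stand-in (random weights, an orthographic camera, nearest-pixel sampling), not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(1)
H = W = 8
C = 16                                   # feature channels
feat = rng.standard_normal((C, H, W))    # stand-in for a conv image encoder's output
W1 = rng.standard_normal((32, C + 1))    # toy MLP weights
w2 = rng.standard_normal(32)

def occupancy(points):
    """PIFu-style query: occupancy f(F(x), z) in (0, 1) per 3D point,
    where x is the 2D projection of the point and z is its depth."""
    probs = []
    for p in points:
        # Orthographic projection: (X, Y) in [-1, 1] -> pixel coordinates.
        u = int(np.clip((p[0] + 1) / 2 * (W - 1), 0, W - 1))
        v = int(np.clip((p[1] + 1) / 2 * (H - 1), 0, H - 1))
        phi = feat[:, v, u]                        # pixel-aligned feature F(x)
        h = np.tanh(W1 @ np.append(phi, p[2]))     # condition on depth z
        probs.append(1 / (1 + np.exp(-(w2 @ h))))  # sigmoid -> probability
    return np.array(probs)

pts = rng.uniform(-1, 1, size=(5, 3))
print(occupancy(pts))  # per-point inside/outside probabilities
```

Because the function can be queried at arbitrary 3D points, the surface is recovered at whatever resolution the query grid provides, rather than being capped by a fixed voxel volume.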

SiCloPe: Silhouette-Based Clothed People

1 code implementation • CVPR 2019 • Ryota Natsume, Shunsuke Saito, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, Shigeo Morishima

The synthesized silhouettes which are the most consistent with the input segmentation are fed into a deep visual hull algorithm for robust 3D shape prediction.

Generative Adversarial Network • Image-to-Image Translation
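As background for the deep visual hull the abstract mentions, the classical visual hull is straightforward: a 3D point is kept only if it projects inside every silhouette. A toy carving example (orthographic views of a sphere on a small voxel grid; SiCloPe's deep variant replaces this hard intersection with a learned network):

```python
import numpy as np

N = 32
xs = np.linspace(-1, 1, N)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")

# Ground-truth shape: a sphere; silhouettes from two orthographic views.
inside = X**2 + Y**2 + Z**2 <= 0.5**2
sil_front = inside.any(axis=2)   # project along Z -> (x, y) silhouette
sil_side = inside.any(axis=0)    # project along X -> (y, z) silhouette

# Visual hull: keep a voxel only if it lands inside every silhouette.
hull = sil_front[:, :, None] & sil_side[None, :, :]

print(inside.sum(), hull.sum())  # the hull is a superset of the true shape
```

With only two views the hull overestimates the shape; consistency across many synthesized silhouettes is what tightens the 3D prediction.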

Deep Volumetric Video From Very Sparse Multi-View Performance Capture

no code implementations • ECCV 2018 • Zeng Huang, Tianye Li, Weikai Chen, Yajie Zhao, Jun Xing, Chloe LeGendre, Linjie Luo, Chongyang Ma, Hao Li

We present a deep learning-based volumetric capture approach for performance capture using a passive and highly sparse multi-view capture system.

Surface Reconstruction

Realistic Dynamic Facial Textures From a Single Image Using GANs

no code implementations • ICCV 2017 • Kyle Olszewski, Zimo Li, Chao Yang, Yi Zhou, Ronald Yu, Zeng Huang, Sitao Xiang, Shunsuke Saito, Pushmeet Kohli, Hao Li

By retargeting the PCA expression geometry from the source, as well as using the newly inferred texture, we can both animate the face and perform video face replacement on the source video using the target appearance.

3D geometry

Auto-Conditioned Recurrent Networks for Extended Complex Human Motion Synthesis

1 code implementation • ICLR 2018 • Zimo Li, Yi Zhou, Shuangjiu Xiao, Chong He, Zeng Huang, Hao Li

We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN).

Motion Synthesis
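The auto-conditioning idea is a training-time rollout schedule: alternate blocks of ground-truth inputs with blocks where the network consumes its own previous output, so it learns to recover from its own errors. A minimal sketch (a random linear map stands in for the paper's LSTM, and the block lengths are placeholder values):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4                                    # pose vector size
Wr = rng.standard_normal((D, D)) * 0.1   # toy recurrent map standing in for an LSTM

def rollout(gt_seq, gt_len=5, cond_len=5):
    """Auto-conditioned rollout: feed gt_len frames of ground truth,
    then cond_len frames of the network's own predictions, repeating."""
    outputs, prev = [], gt_seq[0]
    for t in range(1, len(gt_seq)):
        pred = np.tanh(Wr @ prev)
        outputs.append(pred)
        # Position within the repeating (gt_len + cond_len) schedule.
        use_own = (t % (gt_len + cond_len)) >= gt_len
        prev = pred if use_own else gt_seq[t]
    return np.array(outputs)

seq = rng.standard_normal((20, D))
preds = rollout(seq)
print(preds.shape)  # one prediction per input frame after the first
```

Training purely on ground-truth inputs (teacher forcing) lets small errors compound at test time; exposing the network to its own outputs during training is what allows the very long, stable motion rollouts the paper targets.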
