Search Results for author: Eric R. Chan

Found 10 papers, 4 papers with code

Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization

no code implementations · 4 May 2023 · Connor Z. Lin, Koki Nagano, Jan Kautz, Eric R. Chan, Umar Iqbal, Leonidas Guibas, Gordon Wetzstein, Sameh Khamis

We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.

Face Model · Face Reconstruction

Real-Time Radiance Fields for Single-Image Portrait View Synthesis

no code implementations · 3 May 2023 · Alex Trevithick, Matthew Chan, Michael Stengel, Eric R. Chan, Chao Liu, Zhiding Yu, Sameh Khamis, Manmohan Chandraker, Ravi Ramamoorthi, Koki Nagano

We present a one-shot method to infer and render a photorealistic 3D representation from a single unposed image (e.g., a face portrait) in real time.

Data Augmentation · Novel View Synthesis

Generative Neural Articulated Radiance Fields

no code implementations · 28 Jun 2022 · Alexander W. Bergman, Petr Kellnhofer, Wang Yifan, Eric R. Chan, David B. Lindell, Gordon Wetzstein

Unsupervised learning of 3D-aware generative adversarial networks (GANs) using only collections of single-view 2D photographs has recently made rapid progress.

3D GAN Inversion for Controllable Portrait Image Animation

no code implementations · 25 Mar 2022 · Connor Z. Lin, David B. Lindell, Eric R. Chan, Gordon Wetzstein

Portrait image animation enables the post-capture adjustment of attributes such as pose, expression, and appearance from a single image while maintaining a photorealistic reconstruction of the subject's likeness or identity.

Attribute · Generative Adversarial Network · +2

ACORN: Adaptive Coordinate Networks for Neural Scene Representation

1 code implementation · 6 May 2021 · Julien N. P. Martel, David B. Lindell, Connor Z. Lin, Eric R. Chan, Marco Monteiro, Gordon Wetzstein

Here, we introduce a new hybrid implicit-explicit network architecture and training strategy that adaptively allocates resources during training and inference based on the local complexity of a signal of interest.

3D Shape Representation · Representation Learning
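
ACORN's abstract hinges on allocating representation capacity where a signal is locally complex. Below is a minimal, hypothetical Python sketch of that allocation idea only: a quadtree-style refinement that subdivides blocks whose local variance is high. It is not the paper's hybrid implicit-explicit architecture or training strategy, and the function name `adaptive_blocks`, the variance criterion, and the thresholds are illustrative assumptions.

```python
# Hedged sketch (not the paper's implementation): refine a 2D signal into
# blocks, spending more resolution only where local complexity is high.
import numpy as np

def adaptive_blocks(signal, x0, y0, size, var_thresh=0.01, min_size=8):
    """Recursively split a square block while its pixel variance stays high."""
    patch = signal[y0:y0 + size, x0:x0 + size]
    if size <= min_size or patch.var() <= var_thresh:
        return [(x0, y0, size)]           # one coarse block is enough here
    half = size // 2
    blocks = []
    for dy in (0, half):                  # quadtree split into four children
        for dx in (0, half):
            blocks += adaptive_blocks(signal, x0 + dx, y0 + dy, half,
                                      var_thresh, min_size)
    return blocks

# Toy signal: smooth background with one small high-frequency region.
img = np.zeros((64, 64))
img[40:56, 40:56] = np.random.rand(16, 16)
blocks = adaptive_blocks(img, 0, 0, 64)
print(f"{len(blocks)} blocks, finest block size {min(b[2] for b in blocks)}")
```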

MetaSDF: Meta-learning Signed Distance Functions

2 code implementations · NeurIPS 2020 · Vincent Sitzmann, Eric R. Chan, Richard Tucker, Noah Snavely, Gordon Wetzstein

Neural implicit shape representations are an emerging paradigm that offers many potential benefits over conventional discrete representations, including memory efficiency at a high spatial resolution.

Meta-Learning
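
To make the implicit-versus-discrete contrast in the MetaSDF abstract concrete, here is a small, hypothetical sketch: an analytic signed distance function for a sphere that can be queried at arbitrary continuous coordinates, set against the fixed storage cost of a dense voxel grid. The paper itself parameterizes the SDF with a neural network and meta-learns its initialization for fast specialization to new shapes; none of that machinery appears here, and `sphere_sdf` is purely illustrative.

```python
# Hedged sketch (not from the paper): an implicit signed distance function can
# be evaluated at any continuous coordinate, without storing a dense grid.
import numpy as np

def sphere_sdf(points, center=(0.0, 0.0, 0.0), radius=0.5):
    """Signed distance to a sphere: negative inside, zero on, positive outside."""
    return np.linalg.norm(points - np.asarray(center), axis=-1) - radius

# A discrete alternative, a 256^3 occupancy grid, stores ~16.8M values up front,
grid_cells = 256 ** 3
# whereas the implicit form is evaluated only where needed, at any resolution.
queries = np.array([[0.0, 0.0, 0.0],      # inside the sphere
                    [0.6, 0.0, 0.0],      # outside
                    [0.5, 0.0, 0.0]])     # exactly on the surface
print(sphere_sdf(queries))                # [-0.5  0.1  0. ]
print(f"dense grid: {grid_cells:,} cells regardless of shape complexity")
```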
