Search Results for author: Alex Trevithick

Found 6 papers, 1 paper with code

What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs

no code implementations • 4 Jan 2024 • Alex Trevithick, Matthew Chan, Towaki Takikawa, Umar Iqbal, Shalini De Mello, Manmohan Chandraker, Ravi Ramamoorthi, Koki Nagano

3D-aware Generative Adversarial Networks (GANs) have shown remarkable progress in learning to generate multi-view-consistent images and 3D geometries of scenes from collections of 2D images via neural volume rendering.
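
As a note on mechanism, here is a minimal sketch of the neural volume rendering compositing step such 3D GANs rely on (plain NumPy; not the authors' implementation, and all names are illustrative): per-sample densities along a camera ray are turned into opacities and alpha-composited into a pixel color.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite per-sample densities and colors along one camera ray.

    densities: (N,) non-negative sigma at each sample
    colors:    (N, 3) RGB at each sample
    deltas:    (N,) spacing between consecutive samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)                      # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]  # transmittance T_i
    weights = trans * alphas                                        # per-sample contribution
    return (weights[:, None] * colors).sum(axis=0)                  # composited pixel color

# Toy usage: 64 random samples along a single ray.
rng = np.random.default_rng(0)
pixel = composite_ray(rng.uniform(0, 2, 64),
                      rng.uniform(0, 1, (64, 3)),
                      np.full(64, 1.0 / 64))
```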

Neural Rendering • Super-Resolution

PVP: Personalized Video Prior for Editable Dynamic Portraits using StyleGAN

no code implementations • 29 Jun 2023 • Kai-En Lin, Alex Trevithick, Keli Cheng, Michel Sarkis, Mohsen Ghafoorian, Ning Bi, Gerhard Reitmayr, Ravi Ramamoorthi

In this work, our goal is to take as input a monocular video of a face, and create an editable dynamic portrait able to handle extreme head poses.

Face Generation

Real-Time Radiance Fields for Single-Image Portrait View Synthesis

no code implementations • 3 May 2023 • Alex Trevithick, Matthew Chan, Michael Stengel, Eric R. Chan, Chao Liu, Zhiding Yu, Sameh Khamis, Manmohan Chandraker, Ravi Ramamoorthi, Koki Nagano

We present a one-shot method to infer and render a photorealistic 3D representation from a single unposed image (e.g., a face portrait) in real time.

Data Augmentation • Novel View Synthesis

NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion

no code implementations • 20 Feb 2023 • Jiatao Gu, Alex Trevithick, Kai-En Lin, Josh Susskind, Christian Theobalt, Lingjie Liu, Ravi Ramamoorthi

Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.

Novel View Synthesis

GRF: Learning a General Radiance Field for 3D Representation and Rendering

1 code implementation • ICCV 2021 • Alex Trevithick, Bo Yang

We present a simple yet powerful neural network that implicitly represents and renders 3D objects and scenes only from 2D observations.

3D Scene Reconstruction • Position

GRF: Learning a General Radiance Field for 3D Scene Representation and Rendering

no code implementations • 28 Sep 2020 • Alex Trevithick, Bo Yang

The function models 3D scenes as a general radiance field: it takes a set of 2D images with camera poses and intrinsics as input, constructs an internal representation for each 3D point of the scene, and renders the appearance and geometry of any 3D point viewed from an arbitrary angle.
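
To make that pipeline concrete, here is a minimal sketch of querying such a general radiance field at one 3D point (plain NumPy; all names are hypothetical, and a simple mean stands in for the learned cross-view aggregation the full model would use):

```python
import numpy as np

def project(point, K, w2c):
    """Project a 3D world point into pixel coordinates of one posed view."""
    cam = w2c @ np.append(point, 1.0)   # world -> camera frame (w2c is 3x4)
    uv = K @ cam
    return uv[:2] / uv[2]               # perspective divide

def grf_query(point, view_dir, feat_maps, Ks, w2cs, decoder):
    """Gather per-view 2D features at the point's projections, aggregate
    across views, and decode to appearance (RGB) and geometry (density)."""
    per_view = []
    for feats, K, w2c in zip(feat_maps, Ks, w2cs):
        u, v = np.round(project(point, K, w2c)).astype(int)
        h, w = feats.shape[:2]
        if 0 <= v < h and 0 <= u < w:   # keep only views that see the point
            per_view.append(feats[v, u])
    agg = np.mean(per_view, axis=0)     # mean in place of learned attention
    return decoder(np.concatenate([agg, point, view_dir]))

# Toy usage: two 8x8 views with 4-channel features and a dummy decoder
# standing in for the MLP that would predict (rgb, sigma).
rng = np.random.default_rng(0)
feat_maps = [rng.normal(size=(8, 8, 4)) for _ in range(2)]
Ks = [np.array([[8.0, 0, 4], [0, 8.0, 4], [0, 0, 1]])] * 2
w2cs = [np.hstack([np.eye(3), [[0.0], [0.0], [2.0]]])] * 2  # cameras 2 units back
decoder = lambda x: (x[:3], abs(x[3]))                      # (rgb, sigma) placeholder
rgb, sigma = grf_query(np.zeros(3), np.array([0, 0, -1.0]),
                       feat_maps, Ks, w2cs, decoder)
```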
