Search Results for author: Thu Nguyen-Phuoc

Found 10 papers, 3 papers with code

NeRF Analogies: Example-Based Visual Attribute Transfer for NeRFs

no code implementations 13 Feb 2024 Michael Fischer, Zhengqin Li, Thu Nguyen-Phuoc, Aljaz Bozic, Zhao Dong, Carl Marshall, Tobias Ritschel

A Neural Radiance Field (NeRF) encodes the specific relation of 3D geometry and appearance of a scene.

Attribute

ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields

no code implementations 31 Jan 2024 Edward Bartrum, Thu Nguyen-Phuoc, Chris Xie, Zhengqin Li, Numair Khan, Armen Avetisyan, Douglas Lanman, Lei Xiao

We introduce the ReplaceAnything3D model (RAM3D), a novel text-guided 3D scene editing method that enables the replacement of specific objects within a scene.

3D Scene Editing Object

TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion

no code implementations 17 Jan 2024 Yu-Ying Yeh, Jia-Bin Huang, Changil Kim, Lei Xiao, Thu Nguyen-Phuoc, Numair Khan, Cheng Zhang, Manmohan Chandraker, Carl S Marshall, Zhao Dong, Zhengqin Li

In contrast, TextureDreamer can transfer highly detailed, intricate textures from real-world environments to arbitrary objects with only a few casually captured images, which could significantly democratize texture creation.

Texture Synthesis

GauFRe: Gaussian Deformation Fields for Real-time Dynamic Novel View Synthesis

no code implementations 18 Dec 2023 Yiqing Liang, Numair Khan, Zhengqin Li, Thu Nguyen-Phuoc, Douglas Lanman, James Tompkin, Lei Xiao

We propose a method for dynamic scene reconstruction using deformable 3D Gaussians that is tailored for monocular video.

Novel View Synthesis

AlteredAvatar: Stylizing Dynamic 3D Avatars with Fast Style Adaptation

no code implementations 30 May 2023 Thu Nguyen-Phuoc, Gabriel Schwartz, Yuting Ye, Stephen Lombardi, Lei Xiao

Among existing approaches for avatar stylization, direct optimization methods can produce excellent results for arbitrary styles, but they are unpleasantly slow.

Meta-Learning

BlockGAN: Learning 3D Object-aware Scene Representations from Unlabelled Images

1 code implementation NeurIPS 2020 Thu Nguyen-Phuoc, Christian Richardt, Long Mai, Yong-Liang Yang, Niloy Mitra

Our experiments show that using explicit 3D features to represent objects allows BlockGAN to learn disentangled representations both in terms of objects (foreground and background) and their properties (pose and identity).

Object Representation Learning

RenderNet: A deep convolutional network for differentiable rendering from 3D shapes

1 code implementation NeurIPS 2018 Thu Nguyen-Phuoc, Chuan Li, Stephen Balaban, Yong-Liang Yang

We present RenderNet, a differentiable rendering convolutional network with a novel projection unit that can render 2D images from 3D shapes.

Inverse Rendering
