no code implementations • 13 Feb 2024 • Michael Fischer, Zhengqin Li, Thu Nguyen-Phuoc, Aljaz Bozic, Zhao Dong, Carl Marshall, Tobias Ritschel
A Neural Radiance Field (NeRF) encodes the specific relation of 3D geometry and appearance of a scene.
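The encoding the sentence above describes boils down to a learned field that maps a 3D point (and view direction) to density and color. A minimal illustrative sketch of that input/output contract, using a toy one-hidden-layer MLP with random weights (a real NeRF adds positional encoding and a much deeper network):

```python
import numpy as np

def nerf_query(xyz, view_dir, W1, b1, W2, b2):
    """Toy NeRF field: map a 3D point + view direction to (density, RGB).

    Only illustrates the interface described in the text; the weights here
    are random stand-ins for a trained network.
    """
    x = np.concatenate([xyz, view_dir])       # (6,) input
    h = np.maximum(W1 @ x + b1, 0.0)          # hidden layer with ReLU
    out = W2 @ h + b2                         # (4,) raw outputs
    density = np.log1p(np.exp(out[0]))        # softplus keeps density >= 0
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))      # sigmoid keeps color in [0, 1]
    return density, rgb

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 6)), np.zeros(16)
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)
density, rgb = nerf_query(np.array([0.1, 0.2, 0.3]),
                          np.array([0.0, 0.0, 1.0]), W1, b1, W2, b2)
```

Rendering then integrates many such queries along each camera ray; the scene-specific weights are what make the representation tied to one scene.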
no code implementations • 31 Jan 2024 • Edward Bartrum, Thu Nguyen-Phuoc, Chris Xie, Zhengqin Li, Numair Khan, Armen Avetisyan, Douglas Lanman, Lei Xiao
We introduce the ReplaceAnything3D model (RAM3D), a novel text-guided 3D scene editing method that enables the replacement of specific objects within a scene.

no code implementations • 17 Jan 2024 • Yu-Ying Yeh, Jia-Bin Huang, Changil Kim, Lei Xiao, Thu Nguyen-Phuoc, Numair Khan, Cheng Zhang, Manmohan Chandraker, Carl S Marshall, Zhao Dong, Zhengqin Li
In contrast, TextureDreamer can transfer highly detailed, intricate textures from real-world environments to arbitrary objects using only a few casually captured images, with the potential to significantly democratize texture creation.
no code implementations • 18 Dec 2023 • Yiqing Liang, Numair Khan, Zhengqin Li, Thu Nguyen-Phuoc, Douglas Lanman, James Tompkin, Lei Xiao
We propose a method for dynamic scene reconstruction using deformable 3D Gaussians that is tailored for monocular video.
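The core idea named above is to let each 3D Gaussian's parameters vary over time. A minimal sketch of one deformable Gaussian, where a hypothetical `motion_basis` matrix stands in for the learned deformation network (the actual method also deforms rotation and uses a neural field conditioned on time):

```python
import numpy as np

def deform_gaussian(mean, scale, t, motion_basis):
    """Toy deformable 3D Gaussian: offset its mean with a time-dependent field.

    `motion_basis` is a hypothetical stand-in for a learned deformation
    network; a real Gaussian also carries rotation, opacity, and color.
    """
    # Map time to a 3D offset through a fixed set of temporal features.
    offset = motion_basis @ np.array([np.sin(t), np.cos(t), t])  # (3,)
    return mean + offset, scale

mean, scale = np.zeros(3), np.ones(3)
basis = 0.1 * np.eye(3)                       # assumed toy deformation weights
new_mean, new_scale = deform_gaussian(mean, scale, 0.5, basis)
```

Rendering a frame then splats all Gaussians at their deformed positions for the query time, which is what allows reconstruction from a single monocular video.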
no code implementations • 30 May 2023 • Thu Nguyen-Phuoc, Gabriel Schwartz, Yuting Ye, Stephen Lombardi, Lei Xiao
Among existing approaches for avatar stylization, direct optimization methods can produce excellent results for arbitrary styles, but they are prohibitively slow.
no code implementations • 5 Jul 2022 • Thu Nguyen-Phuoc, Feng Liu, Lei Xiao
This paper presents a stylized novel view synthesis method.
no code implementations • ICCV 2021 • Siva Karthik Mustikovela, Shalini De Mello, Aayush Prakash, Umar Iqbal, Sifei Liu, Thu Nguyen-Phuoc, Carsten Rother, Jan Kautz
We present SSOD, the first end-to-end analysis-by-synthesis framework with controllable GANs for the task of self-supervised object detection.
1 code implementation • NeurIPS 2020 • Thu Nguyen-Phuoc, Christian Richardt, Long Mai, Yong-Liang Yang, Niloy Mitra
Our experiments show that using explicit 3D features to represent objects allows BlockGAN to learn disentangled representations both in terms of objects (foreground and background) and their properties (pose and identity).
3 code implementations • ICCV 2019 • Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, Yong-Liang Yang
This shows that HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner.
1 code implementation • NeurIPS 2018 • Thu Nguyen-Phuoc, Chuan Li, Stephen Balaban, Yong-Liang Yang
We present RenderNet, a differentiable rendering convolutional network with a novel projection unit that can render 2D images from 3D shapes.
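The projection unit mentioned above is the piece that lets gradients flow from a 2D image back to a 3D input: it collapses the depth axis of a 3D feature grid into 2D feature channels with a learned linear map. A minimal sketch of that operation under assumed shapes (the actual unit in RenderNet also includes nonlinearities and sits inside a larger convolutional network):

```python
import numpy as np

def projection_unit(voxels, W):
    """Toy RenderNet-style projection unit.

    Stacks the depth axis of a 3D feature grid into the channel axis and
    applies a learned linear map, producing a 2D feature image.
    Shapes: voxels (D, H, W, C) -> features (H, W, C_out), with
    W of shape (D * C, C_out).
    """
    D, H, Wd, C = voxels.shape
    # Move depth next to channels, then fold it into the channel axis.
    flat = voxels.transpose(1, 2, 0, 3).reshape(H, Wd, D * C)
    return flat @ W   # differentiable: gradients reach every voxel

rng = np.random.default_rng(1)
vox = rng.normal(size=(8, 16, 16, 4))   # assumed toy grid: D=8, 16x16, C=4
W = rng.normal(size=(8 * 4, 6))         # learned projection weights (random here)
feats = projection_unit(vox, W)
```

Because the depth collapse is a plain matrix multiply rather than a fixed z-buffer, the network can learn occlusion and shading effects end to end.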