Search Results for author: Zexiang Xu

Found 34 papers, 12 papers with code

Controllable Dynamic Appearance for Neural 3D Portraits

no code implementations • 20 Sep 2023 • ShahRukh Athar, Zhixin Shu, Zexiang Xu, Fujun Luan, Sai Bi, Kalyan Sunkavalli, Dimitris Samaras

The surface normals prediction is guided using 3DMM normals that act as a coarse prior for the normals of the human head, where direct prediction of normals is hard due to rigid and non-rigid deformations induced by head-pose and facial expression changes.

OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects

no code implementations • 14 Sep 2023 • Isabella Liu, Linghao Chen, Ziyang Fu, Liwen Wu, Haian Jin, Zhong Li, Chin Ming Ryan Wong, Yi Xu, Ravi Ramamoorthi, Zexiang Xu, Hao Su

We introduce OpenIllumination, a real-world dataset containing over 108K images of 64 objects with diverse materials, captured under 72 camera views and a large number of different illuminations.

Foreground Segmentation, Inverse Rendering

Strivec: Sparse Tri-Vector Radiance Fields

1 code implementation • ICCV 2023 • Quankai Gao, Qiangeng Xu, Hao Su, Ulrich Neumann, Zexiang Xu

In contrast to TensoRF which uses a global tensor and focuses on their vector-matrix decomposition, we propose to utilize a cloud of local tensors and apply the classic CANDECOMP/PARAFAC (CP) decomposition to factorize each tensor into triple vectors that express local feature distributions along spatial axes and compactly encode a local neural field.

Tensor Decomposition
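The CANDECOMP/PARAFAC (CP) decomposition named above expresses a 3D tensor as a sum of rank-one outer products of per-axis vectors. A minimal NumPy sketch of that idea (illustrative only, not the paper's implementation; the shapes and rank are arbitrary):

```python
import numpy as np

def cp_reconstruct(u, v, w):
    """Reconstruct a 3D tensor from rank-R CP factors.

    u, v, w have shapes (R, X), (R, Y), (R, Z); each rank-r triple of
    vectors contributes one outer product u_r (x) v_r (x) w_r.
    """
    return np.einsum('rx,ry,rz->xyz', u, v, w)

# Sanity check: a rank-1 tensor is exactly recovered by rank-1 factors.
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=4), rng.normal(size=5), rng.normal(size=6)
T = np.einsum('x,y,z->xyz', a, b, c)       # ground-truth rank-1 tensor
T_hat = cp_reconstruct(a[None], b[None], c[None])
print(np.allclose(T, T_hat))  # True
```

Strivec applies this factorization per local tensor in a cloud of small tensors rather than to one global grid, which is what lets each triple of vectors encode only a local feature distribution.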

Neural Free-Viewpoint Relighting for Glossy Indirect Illumination

no code implementations • 12 Jul 2023 • Nithin Raghavan, Yan Xiao, Kai-En Lin, Tiancheng Sun, Sai Bi, Zexiang Xu, Tzu-Mao Li, Ravi Ramamoorthi

In this paper, we demonstrate a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view.

Tensor Decomposition

One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization

1 code implementation • 29 Jun 2023 • Minghua Liu, Chao Xu, Haian Jin, Linghao Chen, Mukund Varma T, Zexiang Xu, Hao Su

Single image 3D reconstruction is an important but challenging task that requires extensive knowledge of our natural world.

3D Reconstruction, Image to 3D, +2

MovingParts: Motion-based 3D Part Discovery in Dynamic Radiance Field

no code implementations • 10 Mar 2023 • Kaizhi Yang, Xiaoshuai Zhang, Zhiao Huang, Xuejin Chen, Zexiang Xu, Hao Su

Under the Lagrangian view, we parameterize the scene motion by tracking the trajectory of particles on objects.

Factor Fields: A Unified Framework for Neural Fields and Beyond

no code implementations • 2 Feb 2023 • Anpei Chen, Zexiang Xu, Xinyue Wei, Siyu Tang, Hao Su, Andreas Geiger

Our experiments show that DiF (Dictionary Fields, the framework's concrete instantiation) leads to improvements in approximation quality, compactness, and training time when compared to previous fast reconstruction methods.


RigNeRF: Fully Controllable Neural 3D Portraits

no code implementations • CVPR 2022 • ShahRukh Athar, Zexiang Xu, Kalyan Sunkavalli, Eli Shechtman, Zhixin Shu

In this work, we propose RigNeRF, a system that goes beyond just novel view synthesis and enables full control of head pose and facial expressions learned from a single portrait video.

Face Model, Neural Rendering, +1

ARF: Artistic Radiance Fields

1 code implementation • 13 Jun 2022 • Kai Zhang, Nick Kolkin, Sai Bi, Fujun Luan, Zexiang Xu, Eli Shechtman, Noah Snavely

We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.

Differentiable Rendering of Neural SDFs through Reparameterization

no code implementations • 10 Jun 2022 • Sai Praveen Bangaru, Michaël Gharbi, Tzu-Mao Li, Fujun Luan, Kalyan Sunkavalli, Miloš Hašan, Sai Bi, Zexiang Xu, Gilbert Bernstein, Frédo Durand

Our method leverages the distance to surface encoded in an SDF and uses quadrature on sphere tracer points to compute this warping function.

Inverse Rendering
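The sphere tracer mentioned in the abstract marches a ray through an SDF by repeatedly stepping forward by the current distance value, which is safe because the SDF bounds the distance to the nearest surface. A minimal, self-contained sketch on an analytic sphere (illustrative; the paper's contribution is the differentiable reparameterization built on top of such tracer points, not the tracer itself):

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 3.0), radius=1.0):
    # Signed distance from point p to a sphere surface.
    dx, dy, dz = p[0] - center[0], p[1] - center[1], p[2] - center[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4):
    """Step along the ray by the SDF value until the surface is within
    eps. Returns the hit distance, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t
        t += d
        if t > 100.0:  # ray escaped the scene
            break
    return None

t = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
print(round(t, 3))  # 2.0 -- front of the unit sphere centered at z = 3
```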

Physically-Based Editing of Indoor Scene Lighting from a Single Image

no code implementations • 19 May 2022 • Zhengqin Li, Jia Shi, Sai Bi, Rui Zhu, Kalyan Sunkavalli, Miloš Hašan, Zexiang Xu, Ravi Ramamoorthi, Manmohan Chandraker

We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.

Inverse Rendering, Lighting Estimation, +1

NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction

1 code implementation • CVPR 2022 • Xiaoshuai Zhang, Sai Bi, Kalyan Sunkavalli, Hao Su, Zexiang Xu

We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.

3D Reconstruction

TensoRF: Tensorial Radiance Fields

2 code implementations • 17 Mar 2022 • Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, Hao Su

We demonstrate that applying traditional CP decomposition -- that factorizes tensors into rank-one components with compact vectors -- in our framework leads to improvements over vanilla NeRF.
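The compactness of such vector factorizations is easy to see with a back-of-the-envelope parameter count: a dense grid grows cubically with resolution, while CP factors grow only linearly. The resolution and rank below are illustrative placeholders, not TensoRF's actual settings:

```python
def dense_params(n):
    # A dense n^3 scalar grid stores n**3 values.
    return n ** 3

def cp_params(n, rank):
    # A rank-R CP factorization stores three length-n vectors
    # per rank-one component: 3 * R * n values.
    return 3 * rank * n

n, rank = 300, 192
print(dense_params(n))     # 27000000
print(cp_params(n, rank))  # 172800 -- over 150x fewer parameters
```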

Point-NeRF: Point-based Neural Radiance Fields

1 code implementation • CVPR 2022 • Qiangeng Xu, Zexiang Xu, Julien Philip, Sai Bi, Zhixin Shu, Kalyan Sunkavalli, Ulrich Neumann

Point-NeRF combines the advantages of these two approaches by using neural 3D point clouds, with associated neural features, to model a radiance field.

3D Reconstruction, Neural Rendering
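The core idea of modeling a radiance field from a feature-carrying point cloud can be sketched as querying the features of points near a shading location. Point-NeRF uses a learned, network-based aggregation; the inverse-distance weighting below is a simplified stand-in, and all names and sizes are illustrative:

```python
import numpy as np

def aggregate_point_features(x, positions, features, radius=0.5):
    """Inverse-distance-weighted aggregation of neural point features
    near shading location x (simplified stand-in for Point-NeRF's
    learned aggregation). Returns a zero feature in empty regions."""
    d = np.linalg.norm(positions - x, axis=1)
    mask = d < radius
    if not mask.any():
        return np.zeros(features.shape[1])  # no nearby points: no density
    w = 1.0 / np.maximum(d[mask], 1e-8)     # closer points weigh more
    w /= w.sum()
    return w @ features[mask]

rng = np.random.default_rng(1)
positions = rng.uniform(-1, 1, size=(100, 3))   # neural point cloud
features = rng.normal(size=(100, 8))            # per-point features
f = aggregate_point_features(np.zeros(3), positions, features)
print(f.shape)  # (8,)
```

The aggregated feature would then be decoded by small MLPs into density and view-dependent radiance at x.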

NeLF: Neural Light-transport Field for Portrait View Synthesis and Relighting

no code implementations • 26 Jul 2021 • Tiancheng Sun, Kai-En Lin, Sai Bi, Zexiang Xu, Ravi Ramamoorthi

Our system is trained on a large number of synthetic models, and can generalize to different synthetic and real portraits under various lighting conditions.

OpenRooms: An Open Framework for Photorealistic Indoor Scene Datasets

no code implementations • CVPR 2021 • Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, YuHan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Hong-Xing Yu, Zexiang Xu, Kalyan Sunkavalli, Milos Hasan, Ravi Ramamoorthi, Manmohan Chandraker

Finally, we demonstrate that our framework may also be integrated with physics engines, to create virtual robotics environments with unique ground truth such as friction coefficients and correspondence to real scenes.

Friction, Inverse Rendering, +1

NeuMIP: Multi-Resolution Neural Materials

no code implementations • 6 Apr 2021 • Alexandr Kuznetsov, Krishna Mullia, Zexiang Xu, Miloš Hašan, Ravi Ramamoorthi

We also introduce neural offsets, a novel method which allows rendering materials with intricate parallax effects without any tessellation.

MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo

2 code implementations • ICCV 2021 • Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, Hao Su

We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.

Neural Rendering

NeuTex: Neural Texture Mapping for Volumetric Neural Rendering

1 code implementation • CVPR 2021 • Fanbo Xiang, Zexiang Xu, Miloš Hašan, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Hao Su

We achieve this by introducing a 3D-to-2D texture mapping (or surface parameterization) network into volumetric representations.

Neural Rendering

Photon-Driven Neural Path Guiding

no code implementations • 5 Oct 2020 • Shilin Zhu, Zexiang Xu, Tiancheng Sun, Alexandr Kuznetsov, Mark Meyer, Henrik Wann Jensen, Hao Su, Ravi Ramamoorthi

To fully make use of our deep neural network, we partition the scene space into an adaptive hierarchical grid, in which we apply our network to reconstruct high-quality sampling distributions for any local region in the scene.

Neural Reflectance Fields for Appearance Acquisition

no code implementations • 9 Aug 2020 • Sai Bi, Zexiang Xu, Pratul Srinivasan, Ben Mildenhall, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, Ravi Ramamoorthi

We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.

OpenRooms: An End-to-End Open Framework for Photorealistic Indoor Scene Datasets

no code implementations • 25 Jul 2020 • Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, YuHan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Zexiang Xu, Hong-Xing Yu, Kalyan Sunkavalli, Miloš Hašan, Ravi Ramamoorthi, Manmohan Chandraker

Finally, we demonstrate that our framework may also be integrated with physics engines, to create virtual robotics environments with unique ground truth such as friction coefficients and correspondence to real scenes.

Friction, Inverse Rendering, +2

Deep Photon Mapping

no code implementations • 25 Apr 2020 • Shilin Zhu, Zexiang Xu, Henrik Wann Jensen, Hao Su, Ravi Ramamoorthi

This network is easy to incorporate in many previous photon mapping methods (by simply swapping the kernel density estimator) and can produce high-quality reconstructions of complex global illumination effects like caustics with an order of magnitude fewer photons compared to previous photon mapping methods.

Denoising, Density Estimation
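The kernel density estimator that the learned network replaces is the classic k-nearest-neighbor photon estimate: sum the powers of the k nearest photons and divide by the area of the disc that contains them. A minimal sketch with a 2D footprint (photon positions and powers below are made up for illustration):

```python
import math

def knn_density_estimate(query, photons, k=3):
    """Classic kNN photon density estimate: sum the k nearest photon
    powers and divide by the disc area pi * r_k**2, where r_k is the
    distance to the k-th nearest photon. This is the hand-crafted
    kernel estimator a learned network can be swapped in for."""
    nearest = sorted(
        (math.dist(query, pos), power) for pos, power in photons
    )[:k]
    r_k = nearest[-1][0]                  # radius of the photon disc
    total_power = sum(p for _, p in nearest)
    return total_power / (math.pi * r_k * r_k)

photons = [((0.1, 0.0), 1.0), ((0.0, 0.2), 1.0),
           ((0.3, 0.0), 1.0), ((5.0, 5.0), 1.0)]
d = knn_density_estimate((0.0, 0.0), photons, k=3)
print(round(d, 2))  # 3 photons within r_k = 0.3 -> 3 / (pi * 0.09)
```

The estimate is noisy when few photons are available, which is exactly the regime where a learned reconstruction can do better with an order of magnitude fewer photons.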

Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images

no code implementations • CVPR 2020 • Sai Bi, Zexiang Xu, Kalyan Sunkavalli, David Kriegman, Ravi Ramamoorthi

We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object from a sparse set of only six images captured by wide-baseline cameras under collocated point lighting.

Deep Stereo using Adaptive Thin Volume Representation with Uncertainty Awareness

1 code implementation • CVPR 2020 • Shuo Cheng, Zexiang Xu, Shilin Zhu, Zhuwen Li, Li Erran Li, Ravi Ramamoorthi, Hao Su

In contrast, we propose adaptive thin volumes (ATVs); in an ATV, the depth hypothesis of each plane is spatially varying, which adapts to the uncertainties of previous per-pixel depth predictions.

3D Reconstruction, Point Clouds
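The spatially varying depth hypotheses described above can be sketched as centering the next stage's depth planes on each pixel's previous prediction, with a spread proportional to that pixel's uncertainty. All resolutions, plane counts, and the scale factor below are illustrative, not the paper's settings:

```python
import numpy as np

def adaptive_thin_volume(depth_prev, sigma, num_planes=8, scale=2.0):
    """Per-pixel depth hypotheses for the next coarse-to-fine stage:
    planes are centered on the previous per-pixel depth prediction and
    spread over +/- scale * sigma, so uncertain pixels search a wider
    depth range (illustrative sketch of the ATV idea)."""
    offsets = np.linspace(-scale, scale, num_planes)           # (P,)
    # Broadcast to a (P, H, W) volume of spatially varying hypotheses.
    return depth_prev[None] + offsets[:, None, None] * sigma[None]

depth_prev = np.full((4, 4), 2.0)  # previous per-pixel depth (meters)
sigma = np.full((4, 4), 0.1)       # per-pixel depth uncertainty
hyp = adaptive_thin_volume(depth_prev, sigma)
print(hyp.shape)  # (8, 4, 4) -- 8 depth planes, varying per pixel
```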

Robust Energy Minimization for BRDF-Invariant Shape From Light Fields

no code implementations CVPR 2017 Zhengqin Li, Zexiang Xu, Ravi Ramamoorthi, Manmohan Chandraker

On the other hand, recent works have explored PDE invariants for shape recovery with complex BRDFs, but they have not been incorporated into robust numerical optimization frameworks.
