Search Results for author: Fujun Luan

Found 20 papers, 3 papers with code

DATENeRF: Depth-Aware Text-based Editing of NeRFs

no code implementations • 6 Apr 2024 • Sara Rojas, Julien Philip, Kai Zhang, Sai Bi, Fujun Luan, Bernard Ghanem, Kalyan Sunkavalli

However, extending these techniques to edit scenes in Neural Radiance Fields (NeRF) is complex, as editing individual 2D frames can result in inconsistencies across multiple views.

Relightable Neural Assets

no code implementations • 14 Dec 2023 • Krishna Mullia, Fujun Luan, Xin Sun, Miloš Hašan

We combine an MLP decoder with a feature grid.
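The listing gives only the high-level recipe: a learned feature grid paired with an MLP decoder. As a rough illustration of that general pattern (a sketch with assumed grid resolution, feature width, and outputs, not the paper's actual architecture), a PyTorch version might look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridMLPDecoder(nn.Module):
    """Illustrative feature-grid + MLP decoder (not the paper's architecture)."""

    def __init__(self, grid_res=64, feat_dim=32, hidden=128, out_dim=3):
        super().__init__()
        # Learnable 3D feature grid of shape (1, C, D, H, W).
        self.grid = nn.Parameter(torch.zeros(1, feat_dim, grid_res, grid_res, grid_res))
        # Small MLP that decodes an interpolated feature plus view direction to RGB.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, xyz, view_dir):
        # xyz, view_dir: (N, 3), with xyz normalized to [-1, 1].
        grid_coords = xyz.view(1, -1, 1, 1, 3)                 # (1, N, 1, 1, 3)
        feats = F.grid_sample(self.grid, grid_coords,
                              mode="bilinear", align_corners=True)  # (1, C, N, 1, 1)
        feats = feats.view(self.grid.shape[1], -1).t()         # (N, C)
        return self.mlp(torch.cat([feats, view_dir], dim=-1))
```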

PF-LRM: Pose-Free Large Reconstruction Model for Joint Pose and Shape Prediction

no code implementations • 20 Nov 2023 • Peng Wang, Hao Tan, Sai Bi, Yinghao Xu, Fujun Luan, Kalyan Sunkavalli, Wenping Wang, Zexiang Xu, Kai Zhang

We propose a Pose-Free Large Reconstruction Model (PF-LRM) for reconstructing a 3D object from a few unposed images, even with little visual overlap, while simultaneously estimating the relative camera poses in ~1.3 seconds on a single A100 GPU.

3D Reconstruction, Image to 3D, +1

DMV3D: Denoising Multi-View Diffusion using 3D Large Reconstruction Model

no code implementations • 15 Nov 2023 • Yinghao Xu, Hao Tan, Fujun Luan, Sai Bi, Peng Wang, Jiahao Li, Zifan Shi, Kalyan Sunkavalli, Gordon Wetzstein, Zexiang Xu, Kai Zhang

We propose DMV3D, a novel 3D generation approach that uses a transformer-based 3D large reconstruction model to denoise multi-view diffusion.

3D Generation, Denoising, +2

Controllable Dynamic Appearance for Neural 3D Portraits

no code implementations • 20 Sep 2023 • ShahRukh Athar, Zhixin Shu, Zexiang Xu, Fujun Luan, Sai Bi, Kalyan Sunkavalli, Dimitris Samaras

Surface normal prediction is guided by 3DMM normals, which act as a coarse prior for the normals of the human head; direct prediction of normals is difficult because of the rigid and non-rigid deformations induced by changes in head pose and facial expression.
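The entry describes guiding predicted surface normals with coarse 3DMM normals. One common way to express such guidance, shown here purely as an illustrative sketch (the loss form and weighting are assumptions, not the paper's exact objective), is a cosine penalty between predicted and prior normals:

```python
import torch
import torch.nn.functional as F

def normal_prior_loss(pred_normals, prior_normals, weight=0.1):
    """Illustrative coarse-prior loss: penalize angular deviation of predicted
    normals from 3DMM-derived prior normals. Both inputs: (N, 3), unnormalized."""
    pred = F.normalize(pred_normals, dim=-1)
    prior = F.normalize(prior_normals, dim=-1)
    # 1 - cos(theta) per point; zero when predicted and prior normals agree.
    return weight * (1.0 - (pred * prior).sum(dim=-1)).mean()
```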

PSDR-Room: Single Photo to Scene using Differentiable Rendering

no code implementations • 6 Jul 2023 • Kai Yan, Fujun Luan, Miloš Hašan, Thibault Groueix, Valentin Deschaintre, Shuang Zhao

A 3D digital scene contains many components: lights, materials, and geometries that interact to produce the desired appearance.

Scene Understanding

I$^2$-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs

no code implementations • 14 Mar 2023 • Jingsen Zhu, Yuchi Huo, Qi Ye, Fujun Luan, Jifan Li, Dianbing Xi, Lisha Wang, Rui Tang, Wei Hua, Hujun Bao, Rui Wang

In this work, we present I$^2$-SDF, a new method for intrinsic indoor scene reconstruction and editing using differentiable Monte Carlo raytracing on neural signed distance fields (SDFs).

Indoor Scene Reconstruction, Novel View Synthesis

I$^2$-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs

no code implementations • CVPR 2023 • Jingsen Zhu, Yuchi Huo, Qi Ye, Fujun Luan, Jifan Li, Dianbing Xi, Lisha Wang, Rui Tang, Wei Hua, Hujun Bao, Rui Wang

Further, we propose to decompose the neural radiance field into spatially-varying material of the scene as a neural field through surface-based, differentiable Monte Carlo raytracing and emitter semantic segmentations, which enables physically based and photorealistic scene relighting and editing applications.

Indoor Scene Reconstruction, Novel View Synthesis
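Both I$^2$-SDF entries rely on differentiable Monte Carlo raytracing of a neural SDF. A standard ingredient of such pipelines, sketched here generically rather than as the authors' code, is obtaining surface normals as the normalized gradient of the SDF via automatic differentiation:

```python
import torch
import torch.nn.functional as F

def sdf_normals(sdf_fn, points):
    """Generic sketch: surface normals of a neural SDF as the normalized
    gradient of the signed distance with respect to the query points."""
    points = points.clone().requires_grad_(True)   # (N, 3)
    sdf_vals = sdf_fn(points)                      # (N,) or (N, 1)
    grads = torch.autograd.grad(sdf_vals.sum(), points, create_graph=True)[0]
    return F.normalize(grads, dim=-1)
```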

Learning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing

no code implementations • 6 Nov 2022 • Jingsen Zhu, Fujun Luan, Yuchi Huo, Zihao Lin, Zhihua Zhong, Dianbing Xi, Jiaxiang Zheng, Rui Tang, Hujun Bao, Rui Wang

Indoor scenes typically exhibit complex, spatially-varying appearance from global illumination, making inverse rendering a challenging ill-posed problem.

Inverse Rendering

ARF: Artistic Radiance Fields

1 code implementation • 13 Jun 2022 • Kai Zhang, Nick Kolkin, Sai Bi, Fujun Luan, Zexiang Xu, Eli Shechtman, Noah Snavely

We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.

Differentiable Rendering of Neural SDFs through Reparameterization

no code implementations • 10 Jun 2022 • Sai Praveen Bangaru, Michaël Gharbi, Tzu-Mao Li, Fujun Luan, Kalyan Sunkavalli, Miloš Hašan, Sai Bi, Zexiang Xu, Gilbert Bernstein, Frédo Durand

Our method leverages the distance to surface encoded in an SDF and uses quadrature on sphere tracer points to compute this warping function.

Inverse Rendering
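The abstract mentions quadrature on sphere tracer points; the warping function itself is specific to the paper, but the sphere tracing it builds on is a standard routine. A generic sphere tracer (an illustrative sketch, not the authors' implementation) can be written as:

```python
import torch

def sphere_trace(sdf_fn, origins, dirs, n_steps=64, eps=1e-4):
    """Generic sphere tracing: march each ray forward by the SDF value until
    the surface (|sdf| < eps) is reached or the step budget is exhausted."""
    t = torch.zeros(origins.shape[0], device=origins.device)
    hit = torch.zeros(origins.shape[0], dtype=torch.bool, device=origins.device)
    for _ in range(n_steps):
        points = origins + t[:, None] * dirs
        dist = sdf_fn(points).squeeze(-1)
        hit = dist.abs() < eps
        # Advance only the rays that have not yet converged.
        t = torch.where(hit, t, t + dist)
        if hit.all():
            break
    return origins + t[:, None] * dirs, hit
```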

IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images

no code implementations • CVPR 2022 • Kai Zhang, Fujun Luan, Zhengqi Li, Noah Snavely

We propose a neural inverse rendering pipeline called IRON that operates on photometric images and outputs high-quality 3D content in the format of triangle meshes and material textures readily deployable in existing graphics pipelines.

Disentanglement, Inverse Rendering

PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting

no code implementations • CVPR 2021 • Kai Zhang, Fujun Luan, Qianqian Wang, Kavita Bala, Noah Snavely

We present PhySG, an end-to-end inverse rendering pipeline that includes a fully differentiable renderer and can reconstruct geometry, materials, and illumination from scratch from a set of RGB input images.

Depth Prediction, Image Relighting, +3
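PhySG represents illumination and specular lobes with spherical Gaussians, whose standard form is G(v) = mu * exp(lambda * (v · xi - 1)) for a unit lobe axis xi, sharpness lambda, and amplitude mu. A minimal evaluation routine (generic parameter names, not the paper's code) is:

```python
import torch

def spherical_gaussian(v, lobe_axis, sharpness, amplitude):
    """Evaluate a spherical Gaussian G(v) = amplitude * exp(sharpness * (v·axis - 1)).
    v, lobe_axis: unit direction vectors of shape (N, 3)."""
    cos_term = (v * lobe_axis).sum(dim=-1, keepdim=True)   # v · axis
    return amplitude * torch.exp(sharpness * (cos_term - 1.0))
```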

Unified Shape and SVBRDF Recovery using Differentiable Monte Carlo Rendering

no code implementations • 28 Mar 2021 • Fujun Luan, Shuang Zhao, Kavita Bala, Zhao Dong

Reconstructing the shape and appearance of real-world objects using measured 2D images has been a long-standing problem in computer vision.

Inverse Transport Networks

no code implementations • 28 Sep 2018 • Chengqian Che, Fujun Luan, Shuang Zhao, Kavita Bala, Ioannis Gkioulekas

We introduce inverse transport networks as a learning architecture for inverse rendering problems where, given input image measurements, we seek to infer physical scene parameters such as shape, material, and illumination.

Inverse Rendering

Deep Painterly Harmonization

12 code implementations • 9 Apr 2018 • Fujun Luan, Sylvain Paris, Eli Shechtman, Kavita Bala

Copying an element from a photo and pasting it into a painting is a challenging task.

Graphics

Deep Photo Style Transfer

21 code implementations • CVPR 2017 • Fujun Luan, Sylvain Paris, Eli Shechtman, Kavita Bala

This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style.

Style Transfer
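Deep Photo Style Transfer builds on neural style transfer; its distinguishing photorealism constraint is not reproduced here, but the Gram-matrix style loss that this family of methods shares can be sketched as follows (a generic sketch over feature maps from a pretrained CNN, not the paper's full objective):

```python
import torch
import torch.nn.functional as F

def gram_matrix(features):
    """Gram matrix of CNN features of shape (C, H, W)."""
    c, h, w = features.shape
    f = features.view(c, h * w)
    return f @ f.t() / (c * h * w)

def style_loss(gen_features, style_features):
    """Generic Gram-matrix style loss over corresponding layers of a CNN."""
    return sum(F.mse_loss(gram_matrix(g), gram_matrix(s))
               for g, s in zip(gen_features, style_features))
```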
