Search Results for author: Zhengqin Li

Found 20 papers, 3 papers with code

NeRF Analogies: Example-Based Visual Attribute Transfer for NeRFs

no code implementations • 13 Feb 2024 • Michael Fischer, Zhengqin Li, Thu Nguyen-Phuoc, Aljaz Bozic, Zhao Dong, Carl Marshall, Tobias Ritschel

A Neural Radiance Field (NeRF) encodes the specific relation of 3D geometry and appearance of a scene.

Attribute
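
To make the representation concrete: a minimal PyTorch sketch of a NeRF-style field mapping a 3D position and view direction to density and color. The positional encoding, layer widths, and head layout here are illustrative assumptions, not the networks used in this paper.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map coordinates to sin/cos features at multiple frequencies."""
    feats = [x]
    for i in range(num_freqs):
        feats += [torch.sin(2.0 ** i * x), torch.cos(2.0 ** i * x)]
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    """Minimal NeRF-style field: (position, direction) -> (density, RGB)."""
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        pos_dim = 3 * (1 + 2 * num_freqs)
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)        # geometry
        self.color_head = nn.Linear(hidden + 3, 3)      # view-dependent appearance

    def forward(self, xyz, view_dir):
        h = self.trunk(positional_encoding(xyz))
        sigma = torch.relu(self.density_head(h))        # non-negative density
        rgb = torch.sigmoid(self.color_head(torch.cat([h, view_dir], -1)))
        return sigma, rgb

sigma, rgb = TinyNeRF()(torch.rand(8, 3), torch.rand(8, 3))
```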

ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields

no code implementations • 31 Jan 2024 • Edward Bartrum, Thu Nguyen-Phuoc, Chris Xie, Zhengqin Li, Numair Khan, Armen Avetisyan, Douglas Lanman, Lei Xiao

We introduce the ReplaceAnything3D model (RAM3D), a novel text-guided 3D scene editing method that enables the replacement of specific objects within a scene.

3D Scene Editing • Object

IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images

no code implementations • 23 Jan 2024 • Zhi-Hao Lin, Jia-Bin Huang, Zhengqin Li, Zhao Dong, Christian Richardt, Tuotuo Li, Michael Zollhöfer, Johannes Kopf, Shenlong Wang, Changil Kim

While numerous 3D reconstruction and novel-view synthesis methods allow for photorealistic rendering of a scene from multi-view images easily captured with consumer cameras, they bake illumination into their representations and fall short of supporting advanced applications such as material editing, relighting, and virtual object insertion.

3D Reconstruction • Inverse Rendering • +1

TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion

no code implementations • 17 Jan 2024 • Yu-Ying Yeh, Jia-Bin Huang, Changil Kim, Lei Xiao, Thu Nguyen-Phuoc, Numair Khan, Cheng Zhang, Manmohan Chandraker, Carl S Marshall, Zhao Dong, Zhengqin Li

In contrast, TextureDreamer can transfer highly detailed, intricate textures from real-world environments to arbitrary objects using only a few casually captured images, which could significantly democratize texture creation.

Texture Synthesis

GauFRe: Gaussian Deformation Fields for Real-time Dynamic Novel View Synthesis

no code implementations • 18 Dec 2023 • Yiqing Liang, Numair Khan, Zhengqin Li, Thu Nguyen-Phuoc, Douglas Lanman, James Tompkin, Lei Xiao

We propose a method for dynamic scene reconstruction using deformable 3D Gaussians that is tailored for monocular video.

Novel View Synthesis
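
A minimal sketch of the general idea of a time-conditioned deformation field applied to 3D Gaussian means, assuming a simple MLP over (position, time); GauFRe's actual architecture and its static/dynamic decomposition are not reproduced here.

```python
import torch
import torch.nn as nn

class GaussianDeformationField(nn.Module):
    """Sketch: a time-conditioned MLP that displaces canonical Gaussian means.
    Illustrative only; not the paper's network or parameterization."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),   # input: (x, y, z, t)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),              # output: per-mean displacement
        )

    def forward(self, means, t):
        # means: (N, 3) canonical Gaussian centers; t: scalar time in [0, 1]
        t_col = torch.full((means.shape[0], 1), float(t))
        return means + self.mlp(torch.cat([means, t_col], dim=-1))

canonical = torch.rand(1000, 3)
deformed = GaussianDeformationField()(canonical, t=0.5)  # (1000, 3)
```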

Efficient Graphics Representation with Differentiable Indirection

no code implementations • 12 Sep 2023 • Sayantan Datta, Carl Marshall, Derek Nowrouzezahrai, Zhao Dong, Zhengqin Li

We introduce differentiable indirection -- a novel learned primitive that employs differentiable multi-scale lookup tables as an effective substitute for traditional compute and data operations across the graphics pipeline.
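
A minimal sketch of the core idea, assuming a 2D setting: one learned table produces continuous coordinates that address a second learned table, with bilinear interpolation (grid_sample) keeping both lookups differentiable. Table resolutions and channel counts are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class DifferentiableIndirection(torch.nn.Module):
    """Sketch of differentiable indirection: a learned index table whose
    continuous output addresses a learned data table. Bilinear interpolation
    lets gradients flow into both tables."""
    def __init__(self, index_res=16, data_res=64, channels=3):
        super().__init__()
        self.index_table = torch.nn.Parameter(torch.rand(1, 2, index_res, index_res))
        self.data_table = torch.nn.Parameter(torch.rand(1, channels, data_res, data_res))

    def lookup(self, table, uv):
        # uv in [0, 1]^2, shape (N, 2); grid_sample expects coords in [-1, 1]
        grid = uv.view(1, -1, 1, 2) * 2.0 - 1.0
        out = F.grid_sample(table, grid, mode='bilinear', align_corners=True)
        return out.view(table.shape[1], -1).t()        # (N, C)

    def forward(self, uv):
        redirected = self.lookup(self.index_table, uv) # first hop: learned address
        return self.lookup(self.data_table, redirected)  # second hop: learned data

out = DifferentiableIndirection()(torch.rand(8, 2))    # (8, 3)
```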

Spatiotemporally Consistent HDR Indoor Lighting Estimation

no code implementations • 7 May 2023 • Zhengqin Li, Li Yu, Mikhail Okunev, Manmohan Chandraker, Zhao Dong

For training, we significantly enhance the OpenRooms public dataset of photorealistic synthetic indoor scenes with around 360K HDR environment maps of much higher resolution and 38K video sequences, rendered with GPU-based path tracing.

Lighting Estimation

Neural-PBIR Reconstruction of Shape, Material, and Illumination

no code implementations • ICCV 2023 • Cheng Sun, Guangyan Cai, Zhengqin Li, Kai Yan, Cheng Zhang, Carl Marshall, Jia-Bin Huang, Shuang Zhao, Zhao Dong

In the last stage, initialized by the neural predictions, we perform PBIR to refine the initial results and obtain the final high-quality reconstruction of object shape, material, and illumination.

Depth Prediction • Image Relighting • +5
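
A toy stand-in for the refinement stage, assuming a Lambertian shading model in place of full physics-based differentiable rendering: a material estimate initialized from a "neural" prediction is refined by gradient descent on a photometric loss.

```python
import torch

# Illustrative assumption: a differentiable Lambertian renderer over random
# normals, standing in for the paper's physics-based inverse rendering (PBIR).
normals = torch.nn.functional.normalize(torch.randn(256, 3), dim=-1)
light_dir = torch.nn.functional.normalize(torch.tensor([0.3, 0.8, 0.5]), dim=-1)
true_albedo = torch.tensor([0.7, 0.4, 0.2])

def render(albedo):
    ndotl = (normals @ light_dir).clamp(min=0.0)[:, None]  # (256, 1) cosine term
    return albedo * ndotl                                  # (256, 3) shaded pixels

target = render(true_albedo)
albedo = torch.tensor([0.5, 0.5, 0.5], requires_grad=True)  # "neural" initialization
opt = torch.optim.Adam([albedo], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = ((render(albedo) - target) ** 2).mean()          # photometric loss
    loss.backward()
    opt.step()
# albedo now approaches true_albedo
```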

IRISformer: Dense Vision Transformers for Single-Image Inverse Rendering in Indoor Scenes

no code implementations • CVPR 2022 • Rui Zhu, Zhengqin Li, Janarbek Matai, Fatih Porikli, Manmohan Chandraker

Indoor scenes exhibit significant appearance variations due to myriad interactions between arbitrarily diverse object shapes, spatially-changing materials, and complex lighting.

Inverse Rendering

Physically-Based Editing of Indoor Scene Lighting from a Single Image

no code implementations • 19 May 2022 • Zhengqin Li, Jia Shi, Sai Bi, Rui Zhu, Kalyan Sunkavalli, Miloš Hašan, Zexiang Xu, Ravi Ramamoorthi, Manmohan Chandraker

We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.

Inverse Rendering • Lighting Estimation • +1

Learning Neural Transmittance for Efficient Rendering of Reflectance Fields

no code implementations • 25 Oct 2021 • Mohammad Shafiei, Sai Bi, Zhengqin Li, Aidas Liaudanskas, Rodrigo Ortiz-Cayon, Ravi Ramamoorthi

However, it remains challenging and time-consuming to render such representations under complex lighting, such as environment maps, since doing so requires marching a ray towards each individual light to compute the transmittance at every sampled point.
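
The cost the paper targets can be seen in a brute-force sketch: computing transmittance toward one light by marching through a density field, T = exp(-Σ σ·δ). Repeating this per light and per shading point is what a learned transmittance function amortizes; the density field and sampling scheme below are illustrative assumptions.

```python
import torch

def transmittance(density_fn, origin, direction, num_samples=64, t_max=1.0):
    """Brute-force transmittance along one ray: T = exp(-sum(sigma * delta))."""
    ts = torch.linspace(0.0, t_max, num_samples)
    delta = t_max / num_samples
    points = origin + ts[:, None] * direction       # (S, 3) samples along the ray
    sigma = density_fn(points).squeeze(-1)          # (S,) densities
    return torch.exp(-(sigma * delta).sum())        # scalar in (0, 1]

# Toy density field: a soft Gaussian blob centered at the origin.
blob = lambda p: torch.exp(-(p ** 2).sum(-1, keepdim=True) * 8.0)
T = transmittance(blob, torch.tensor([0., 0., -1.]), torch.tensor([0., 0., 1.]))
```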

OpenRooms: An Open Framework for Photorealistic Indoor Scene Datasets

no code implementations • CVPR 2021 • Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, YuHan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Hong-Xing Yu, Zexiang Xu, Kalyan Sunkavalli, Miloš Hašan, Ravi Ramamoorthi, Manmohan Chandraker

Finally, we demonstrate that our framework may also be integrated with physics engines to create virtual robotics environments with unique ground truth, such as friction coefficients and correspondence to real scenes.

Friction • Inverse Rendering • +1

OpenRooms: An End-to-End Open Framework for Photorealistic Indoor Scene Datasets

no code implementations • 25 Jul 2020 • Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, YuHan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Zexiang Xu, Hong-Xing Yu, Kalyan Sunkavalli, Miloš Hašan, Ravi Ramamoorthi, Manmohan Chandraker

Finally, we demonstrate that our framework may also be integrated with physics engines to create virtual robotics environments with unique ground truth, such as friction coefficients and correspondence to real scenes.

Friction • Inverse Rendering • +2

Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF from a Single Image

1 code implementation • CVPR 2020 • Zhengqin Li, Mohammad Shafiei, Ravi Ramamoorthi, Kalyan Sunkavalli, Manmohan Chandraker

Our inverse rendering network incorporates physical insights -- including a spatially-varying spherical Gaussian lighting representation, a differentiable rendering layer to model scene appearance, a cascade structure to iteratively refine the predictions and a bilateral solver for refinement -- allowing us to jointly reason about shape, lighting, and reflectance.

Inverse Rendering
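
For reference, evaluating a spherical Gaussian mixture of the kind used for lighting, L(v) = Σ_k a_k · exp(λ_k (v·μ_k − 1)). The mixture size and tensor layout are assumptions, and the paper predicts such parameters per spatial location rather than globally as sketched here.

```python
import torch

def eval_spherical_gaussians(dirs, lobe_axes, sharpness, amplitudes):
    """Spherical Gaussian mixture: L(v) = sum_k a_k * exp(lambda_k * (v . mu_k - 1)).
    dirs: (N, 3) unit query directions; lobe_axes: (K, 3) unit lobe centers;
    sharpness: (K,); amplitudes: (K, 3) RGB."""
    cos = dirs @ lobe_axes.t()                              # (N, K) cosines
    weights = torch.exp(sharpness[None, :] * (cos - 1.0))   # (N, K) lobe falloffs
    return weights @ amplitudes                             # (N, 3) RGB radiance

K = 12
axes = torch.nn.functional.normalize(torch.randn(K, 3), dim=-1)
radiance = eval_spherical_gaussians(
    torch.nn.functional.normalize(torch.randn(8, 3), dim=-1),
    axes, torch.rand(K) * 20.0, torch.rand(K, 3))
```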

Materials for Masses: SVBRDF Acquisition with a Single Mobile Phone Image

no code implementations • ECCV 2018 • Zhengqin Li, Kalyan Sunkavalli, Manmohan Chandraker

We propose a material acquisition approach to recover the spatially-varying BRDF and normal map of a near-planar surface from a single image captured by a handheld mobile phone camera.

Robust Energy Minimization for BRDF-Invariant Shape From Light Fields

no code implementations • CVPR 2017 • Zhengqin Li, Zexiang Xu, Ravi Ramamoorthi, Manmohan Chandraker

On the other hand, recent works have explored PDE invariants for shape recovery with complex BRDFs, but they have not been incorporated into robust numerical optimization frameworks.

Automatic Image Cropping: A Computational Complexity Study

no code implementations • CVPR 2016 • Jiansheng Chen, Gaocheng Bai, Shaoheng Liang, Zhengqin Li

Attention-based automatic image cropping aims to preserve the most visually important region in an image.

Image Cropping

Superpixel Segmentation Using Linear Spectral Clustering

no code implementations • CVPR 2015 • Zhengqin Li, Jiansheng Chen

We present in this paper a superpixel segmentation algorithm called Linear Spectral Clustering (LSC), which produces compact and uniform superpixels with low computational costs.

Clustering • Image Segmentation • +3
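
LSC has a readily available implementation in OpenCV's ximgproc module (opencv-contrib-python), based on this paper. A self-contained usage sketch with a synthetic image; the region_size and ratio values are illustrative.

```python
import cv2
import numpy as np

# Synthetic input so the snippet runs standalone; use a real photo in practice.
img = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
img = cv2.GaussianBlur(img, (15, 15), 0)            # smooth so regions can form

lsc = cv2.ximgproc.createSuperpixelLSC(img, region_size=20, ratio=0.075)
lsc.iterate(10)                                     # refinement iterations

labels = lsc.getLabels()                            # (H, W) int32 superpixel ids
contours = lsc.getLabelContourMask()                # boundary mask for display
print("superpixels:", lsc.getNumberOfSuperpixels())
```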
