Search Results for author: Hongzhi Wu

Found 11 papers, 1 paper with code

DiLightNet: Fine-grained Lighting Control for Diffusion-based Image Generation

no code implementations19 Feb 2024 Chong Zeng, Yue Dong, Pieter Peers, Youkang Kong, Hongzhi Wu, Xin Tong

To provide the content creator with fine-grained control over the lighting during image generation, we augment the text prompt with detailed lighting information in the form of radiance hints, i.e., visualizations of the scene geometry with a homogeneous canonical material under the target lighting.

Image Generation
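
As a rough illustration of the idea, the sketch below packs a set of radiance-hint renderings together with a noisy latent as extra conditioning channels for a diffusion model. All shapes, the channel layout, and the number of hints are assumptions for illustration, not DiLightNet's actual interface.

```python
# Minimal sketch (not DiLightNet's code): packing radiance-hint renderings
# as extra conditioning channels alongside a diffusion model's noisy latent.
# All shapes and the number of hints are assumed for illustration.
import numpy as np

def pack_conditioning(radiance_hints, noisy_latent):
    """Concatenate hint images (H, W, 3 each) onto the latent channels."""
    hints = np.concatenate(radiance_hints, axis=-1)
    return np.concatenate([noisy_latent, hints], axis=-1)

# Hypothetical hints: renders of the scene geometry with a homogeneous
# canonical material under the target lighting.
H = W = 64
hints = [np.random.rand(H, W, 3).astype(np.float32) for _ in range(3)]
latent = np.random.rand(H, W, 4).astype(np.float32)
print(pack_conditioning(hints, latent).shape)  # (64, 64, 13)
```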

Gaussian Splashing: Dynamic Fluid Synthesis with Gaussian Splatting

no code implementations27 Jan 2024 Yutao Feng, Xiang Feng, Yintong Shang, Ying Jiang, Chang Yu, Zeshun Zong, Tianjia Shao, Hongzhi Wu, Kun Zhou, Chenfanfu Jiang, Yin Yang

We demonstrate the feasibility of integrating physics-based animations of solids and fluids with 3D Gaussian Splatting (3DGS) to create novel effects in virtual scenes reconstructed using 3DGS.
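
The sketch below illustrates one piece of such a coupling under strong assumptions: a fluid velocity field (here a toy swirl standing in for a real solver such as MPM or SPH) advects the centers of 3D Gaussian kernels each frame, so the splatting renderer sees the updated positions. It is not the paper's pipeline.

```python
# Minimal sketch (assumptions, not the paper's method): advecting the
# centers of 3D Gaussian kernels with a fluid velocity field each frame.
import numpy as np

def advect_gaussians(centers, velocity_fn, dt):
    """Explicit Euler step: move each Gaussian center with the flow."""
    return centers + dt * velocity_fn(centers)

def swirl(p):
    """Toy velocity field: a rigid swirl around the z-axis."""
    v = np.zeros_like(p)
    v[:, 0], v[:, 1] = -p[:, 1], p[:, 0]
    return v

centers = np.random.randn(1024, 3).astype(np.float32)
for _ in range(10):                  # ten simulation substeps
    centers = advect_gaussians(centers, swirl, dt=0.01)
print(centers.shape)                 # (1024, 3)
```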

A Real-time Method for Inserting Virtual Objects into Neural Radiance Fields

no code implementations9 Oct 2023 Keyang Ye, Hongzhi Wu, Xin Tong, Kun Zhou

We present the first real-time method for inserting a rigid virtual object into a neural radiance field, which produces realistic lighting and shadowing effects, as well as allows interactive manipulation of the object.

Lighting Estimation, Object
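
A heavily simplified sketch of one ingredient, shadowing: darken NeRF shading wherever a shadow ray toward the light hits the inserted object, approximated here by a bounding sphere. All shapes and the attenuation factor are assumptions for illustration.

```python
# Minimal sketch (assumed, simplified): attenuating NeRF shading where a
# shadow ray toward a point light hits the inserted object's bounding sphere.
import numpy as np

def hits_sphere(origins, dirs, center, radius):
    """Ray-sphere intersection test, vectorized over unit-direction rays."""
    oc = origins - center
    b = np.einsum('ij,ij->i', oc, dirs)
    c = np.einsum('ij,ij->i', oc, oc) - radius**2
    disc = b * b - c
    t = -b - np.sqrt(np.maximum(disc, 0.0))
    return (disc > 0) & (t > 0)

# Shade NeRF surface points: cast shadow rays toward the light source.
pts = np.random.randn(4096, 3)
light = np.array([0.0, 5.0, 0.0])
dirs = light - pts
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
shadowed = hits_sphere(pts, dirs, center=np.zeros(3), radius=0.5)
radiance = np.ones(len(pts))
radiance[shadowed] *= 0.2            # darken shadowed samples
```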

Relighting Neural Radiance Fields with Shadow and Highlight Hints

1 code implementation25 Aug 2023 Chong Zeng, Guojun Chen, Yue Dong, Pieter Peers, Hongzhi Wu, Xin Tong

This paper presents a novel neural implicit radiance representation for free viewpoint relighting from a small set of unstructured photographs of an object lit by a moving point light source different from the view position.

Position
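
A minimal sketch of how such a representation might assemble its per-sample inputs, with the shadow and highlight hints entering as extra scalar channels; the exact input layout is an assumption, not the paper's architecture.

```python
# Minimal sketch (assumed shapes and layout): assembling inputs for a
# relightable radiance MLP conditioned on shadow and highlight hints.
import numpy as np

def make_inputs(x, view_dir, light_pos, shadow_hint, highlight_hint):
    """Concatenate per-sample position, directions, and scalar hints."""
    light_dir = light_pos - x
    light_dir /= np.linalg.norm(light_dir, axis=-1, keepdims=True)
    return np.concatenate(
        [x, view_dir, light_dir,
         shadow_hint[..., None], highlight_hint[..., None]], axis=-1)

n = 8
x = np.random.randn(n, 3)
v = np.random.randn(n, 3)
inp = make_inputs(x, v, np.array([1.0, 2.0, 3.0]),
                  np.random.rand(n), np.random.rand(n))
print(inp.shape)  # (8, 11)
```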

Learning Photometric Feature Transform for Free-form Object Scan

no code implementations7 Aug 2023 Xiang Feng, Kaizhang Kang, Fan Pei, Huakeng Ding, Jinjiang You, Ping Tan, Kun Zhou, Hongzhi Wu

We propose a novel framework to automatically learn to aggregate and transform photometric measurements from multiple unstructured views into spatially distinctive and view-invariant low-level features, which are fed to a multi-view stereo method to enhance 3D reconstruction.

3D Reconstruction, Object
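
One standard way to aggregate an unordered, variable-size set of views is a shared per-view transform followed by max-pooling, sketched below with an assumed linear layer; the paper's learned transform is not reproduced here.

```python
# Minimal sketch (an assumption, not the paper's network): aggregating
# photometric measurements from several unstructured views into an
# order-invariant per-point feature via a shared transform + max-pooling.
import numpy as np

def transform(m, W, b):
    """Shared per-view linear transform followed by ReLU."""
    return np.maximum(m @ W + b, 0.0)

def aggregate(measurements, W, b):
    """Max-pool over views -> a view-count- and order-invariant feature."""
    feats = transform(measurements, W, b)   # (views, points, d)
    return feats.max(axis=0)                # (points, d)

views, points, c, d = 5, 100, 12, 16
rng = np.random.default_rng(0)
M = rng.random((views, points, c))
W, b = rng.standard_normal((c, d)) * 0.1, np.zeros(d)
print(aggregate(M, W, b).shape)             # (100, 16)
```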

A Unified Spatial-Angular Structured Light for Single-View Acquisition of Shape and Reflectance

no code implementations CVPR 2023 Xianmin Xu, Yuxin Lin, Haoyang Zhou, Chong Zeng, Yaxin Yu, Kun Zhou, Hongzhi Wu

We propose a unified structured light, consisting of an LED array and an LCD mask, for high-quality acquisition of both shape and reflectance from a single view.
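
As a toy stand-in only: the radiance of a ray leaving such a source can be modeled as an LED's intensity modulated by the transmittance of the LCD mask pixel it passes through. The sizes and the flat-index addressing below are assumptions.

```python
# Toy model (not the paper's calibration): each ray from the unified
# spatial-angular source carries one LED's intensity attenuated by the
# transmittance of the LCD mask pixel it crosses.
import numpy as np

def emitted_radiance(led_intensity, lcd_mask, led_idx, pixel_idx):
    """Radiance of the ray from LED `led_idx` through mask pixel `pixel_idx`."""
    return led_intensity[led_idx] * lcd_mask[pixel_idx]

leds = np.random.rand(64)          # programmable LED array intensities
mask = np.random.rand(128 * 128)   # LCD transmittance in [0, 1], flattened
print(emitted_radiance(leds, mask, led_idx=3, pixel_idx=500))
```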

Efficient Reflectance Capture with a Deep Gated Mixture-of-Experts

no code implementations29 Mar 2022 Xiaohe Ma, Yaxin Yu, Hongzhi Wu, Kun Zhou

A common, pre-trained latent transform module is also appended to each decoder, to offset the burden of the increased number of decoders.
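
A minimal sketch of that structure, with assumed sizes and hard top-1 gating: a gate routes the latent to one expert decoder, and a single shared transform is appended after every decoder.

```python
# Minimal sketch (assumed architecture details): a gated mixture-of-experts
# where a gate picks one expert decoder per input, and one shared latent
# transform is appended to every decoder.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def moe_forward(z, gate_W, experts, shared_W):
    """Route latent z through the gated expert, then the shared transform."""
    scores = z @ gate_W                      # gating logits, one per expert
    k = int(np.argmax(scores))               # hard top-1 routing
    h = relu(z @ experts[k])                 # chosen expert decoder
    return h @ shared_W                      # shared latent transform

rng = np.random.default_rng(1)
d, h, out, n_experts = 32, 64, 16, 4
z = rng.standard_normal(d)
gate_W = rng.standard_normal((d, n_experts))
experts = [rng.standard_normal((d, h)) for _ in range(n_experts)]
shared_W = rng.standard_normal((h, out))
print(moe_forward(z, gate_W, experts, shared_W).shape)  # (16,)
```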

DiFT: Differentiable Differential Feature Transform for Multi-View Stereo

no code implementations16 Mar 2022 Kaizhang Kang, Chong Zeng, Hongzhi Wu, Kun Zhou

We present a novel framework to automatically learn to transform the differential cues from a stack of images densely captured with a rotational motion into spatially discriminative and view-invariant per-pixel features at each view.

3D Reconstruction
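
The raw signal such a transform starts from can be sketched as finite differences across the rotation stack; the learned, view-invariant part of DiFT is not reproduced here.

```python
# Minimal sketch (assumed preprocessing): per-pixel differential cues from a
# stack of images densely captured under rotational motion.
import numpy as np

def differential_cues(stack):
    """Finite differences along the rotation axis, per pixel."""
    return np.diff(stack, axis=0)            # (frames - 1, H, W)

stack = np.random.rand(36, 64, 64)           # 36 frames over a full rotation
print(differential_cues(stack).shape)        # (35, 64, 64)
```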

Learning Implicit Body Representations from Double Diffusion Based Neural Radiance Fields

no code implementations23 Dec 2021 Guangming Yao, Hongzhi Wu, Yi Yuan, Lincheng Li, Kun Zhou, Xin Yu

In this paper, we present a novel double diffusion based neural radiance field, dubbed DD-NeRF, to reconstruct human body geometry and render the human body appearance in novel views from a sparse set of images.

Novel View Synthesis
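
The sketch below shows only the standard NeRF-style volume rendering quadrature that any radiance-field body model ultimately relies on; the paper's double-diffusion conditioning itself is not reproduced.

```python
# Minimal sketch of classic NeRF volume rendering (not DD-NeRF's specifics):
# alpha-composite density/color samples along a ray.
import numpy as np

def volume_render(densities, colors, deltas):
    """Alpha-composite samples along a ray (standard NeRF quadrature)."""
    alpha = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * colors).sum(axis=0)

n = 64                                       # samples along one ray
sigma = np.random.rand(n) * 5.0
rgb = np.random.rand(n, 3)
print(volume_render(sigma, rgb, deltas=np.full(n, 0.02)))  # one RGB value
```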

Learning Efficient Photometric Feature Transform for Multi-view Stereo

no code implementations ICCV 2021 Kaizhang Kang, Cihui Xie, Ruisheng Zhu, Xiaohe Ma, Ping Tan, Hongzhi Wu, Kun Zhou

We present a novel framework to learn to convert the per-pixel photometric information at each view into spatially distinctive and view-invariant low-level features, which can be plugged into an existing multi-view stereo pipeline for enhanced 3D reconstruction.

3D Reconstruction
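
To illustrate where such features plug in, the sketch below scores a depth hypothesis by feature similarity instead of raw photo-consistency, as a plane-sweep MVS matcher might; the cosine cost used here is an assumption, not the paper's.

```python
# Minimal sketch (illustrative only): scoring a depth hypothesis in MVS by
# similarity of learned per-pixel features rather than raw pixel colors.
import numpy as np

def matching_cost(feat_ref, feat_src):
    """1 - cosine similarity between reference and warped source features."""
    num = (feat_ref * feat_src).sum(axis=-1)
    den = np.linalg.norm(feat_ref, axis=-1) * np.linalg.norm(feat_src, axis=-1)
    return 1.0 - num / np.maximum(den, 1e-8)

f_ref = np.random.rand(64, 64, 16)           # learned per-pixel features
f_src = np.random.rand(64, 64, 16)           # source view warped to a depth plane
print(matching_cost(f_ref, f_src).shape)     # (64, 64) cost map
```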

Intrinsic Light Field Images

no code implementations15 Aug 2016 Elena Garces, Jose I. Echevarria, Wen Zhang, Hongzhi Wu, Kun Zhou, Diego Gutierrez

We present a method to automatically decompose a light field into its intrinsic shading and albedo components.
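
The underlying intrinsic image model is image = albedo × shading; the sketch below checks that forward model per pixel. The paper's actual contribution, a decomposition coherent across light field views, is an optimization on top of this and is not reproduced here.

```python
# Minimal sketch (not the paper's optimization): the intrinsic image model
# image = albedo * shading, verified per pixel on synthetic data.
import numpy as np

albedo = np.random.rand(32, 32, 3)           # reflectance in [0, 1]
shading = np.random.rand(32, 32, 1)          # grayscale shading
image = albedo * shading                     # forward intrinsic model

# Recovering shading from a known albedo; the inverse problem is ill-posed
# in general, and real methods add priors (here, light-field coherence).
recovered = image / np.maximum(albedo, 1e-8)
print(np.allclose(recovered, np.broadcast_to(shading, recovered.shape)))
```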
