Search Results for author: Anpei Chen

Found 20 papers, 13 papers with code

2D Gaussian Splatting for Geometrically Accurate Radiance Fields

no code implementations • 26 Mar 2024 • Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, Shenghua Gao

3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high quality novel view synthesis and fast rendering speed without baking.

Novel View Synthesis

NeLF-Pro: Neural Light Field Probes

no code implementations • 20 Dec 2023 • Zinuo You, Andreas Geiger, Anpei Chen

We present NeLF-Pro, a novel representation for modeling and reconstructing light fields in diverse natural scenes that vary in extent and spatial granularity.

MuRF: Multi-Baseline Radiance Fields

1 code implementation • 7 Dec 2023 • Haofei Xu, Anpei Chen, Yuedong Chen, Christos Sakaridis, Yulun Zhang, Marc Pollefeys, Andreas Geiger, Fisher Yu

We present Multi-Baseline Radiance Fields (MuRF), a general feed-forward approach to sparse view synthesis under multiple baseline settings (small and large baselines, and different numbers of input views).

Zero-shot Generalization

GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs

no code implementations • 30 Nov 2023 • Gege Gao, Weiyang Liu, Anpei Chen, Andreas Geiger, Bernhard Schölkopf

As pretrained text-to-image diffusion models become increasingly powerful, recent efforts have distilled knowledge from these models to optimize text-guided 3D models.
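
As a rough illustration of the distillation idea mentioned above, the following is a minimal, hypothetical sketch in the spirit of score distillation sampling (as popularized by DreamFusion); `render_3d`, `unet_eps`, `alphas_cumprod`, and `text_emb` are placeholder names, and this is not GraphDreamer's actual pipeline.

```python
# Hypothetical sketch of score distillation: render a 3D model, add noise,
# and use a frozen text-conditioned diffusion model's noise prediction as a
# gradient signal for the 3D parameters. Placeholder callables throughout.
import torch

def sds_step(theta_params, render_3d, unet_eps, alphas_cumprod, text_emb):
    img = render_3d(theta_params)                       # (1, 3, H, W), requires grad
    t = torch.randint(20, 980, (1,), device=img.device) # random diffusion timestep
    alpha_bar = alphas_cumprod[t].view(1, 1, 1, 1)
    noise = torch.randn_like(img)
    # Forward-diffuse the rendering, then predict the noise given the prompt.
    noisy = alpha_bar.sqrt() * img + (1 - alpha_bar).sqrt() * noise
    with torch.no_grad():
        eps_pred = unet_eps(noisy, t, text_emb)
    # Surrogate loss: backpropagates (eps_pred - noise) through the renderer
    # only; the diffusion model's Jacobian is skipped by construction.
    grad = (eps_pred - noise).detach()
    loss = (grad * img).sum()
    loss.backward()
```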

Mip-Splatting: Alias-free 3D Gaussian Splatting

1 code implementation • 27 Nov 2023 • Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, Andreas Geiger

Recently, 3D Gaussian Splatting has demonstrated impressive novel view synthesis results, reaching high fidelity and efficiency.

Novel View Synthesis

Factor Fields: A Unified Framework for Neural Fields and Beyond

1 code implementation • 2 Feb 2023 • Anpei Chen, Zexiang Xu, Xinyue Wei, Siyu Tang, Hao Su, Andreas Geiger

Our experiments show that the dictionary field (DiF) leads to improvements in approximation quality, compactness, and training time compared to previous fast reconstruction methods.

Regression

PREF: Phasorial Embedding Fields for Compact Neural Representations

1 code implementation • 26 May 2022 • Binbin Huang, Xinhao Yan, Anpei Chen, Shenghua Gao, Jingyi Yu

We present an efficient frequency-based neural representation termed PREF: a shallow MLP augmented with a phasor volume that covers a significantly broader spectrum than previous Fourier feature mapping or positional encoding.
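
For context, the sketch below shows the standard Fourier feature mapping / positional encoding baseline that PREF compares against; it is not PREF's phasor volume, and the frequency count is an arbitrary illustrative choice.

```python
# Minimal sketch of NeRF-style positional encoding (Fourier feature mapping):
# map coordinates to [sin(2^k * pi * x), cos(2^k * pi * x)] across frequencies.
import numpy as np

def positional_encoding(x, num_freqs=10):
    """x: (..., D) coordinates in [-1, 1] -> (..., 2 * D * num_freqs) features."""
    x = np.asarray(x, dtype=np.float32)
    freqs = 2.0 ** np.arange(num_freqs) * np.pi    # (num_freqs,)
    angles = x[..., None] * freqs                  # (..., D, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)

# Example: encode one 3D point before feeding it to a small MLP.
print(positional_encoding(np.array([[0.1, -0.4, 0.7]])).shape)  # (1, 60)
```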

TensoRF: Tensorial Radiance Fields

2 code implementations • 17 Mar 2022 • Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, Hao Su

We demonstrate that applying a traditional CP decomposition, which factorizes tensors into rank-one components with compact vectors, in our framework leads to improvements over vanilla NeRF (a minimal sketch of the CP idea follows this entry).

Low-Dose X-Ray CT Reconstruction • Novel View Synthesis
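
The following is a minimal sketch of the CP idea referenced in the abstract: a dense 3D volume approximated by a sum of rank-one terms, each an outer product of three compact 1D vectors. The rank and resolution are illustrative choices, not TensoRF's configuration (which also introduces a vector-matrix factorization).

```python
# CP-factorized 3D grid: value(x, y, z) = sum_r vx[r, x] * vy[r, y] * vz[r, z].
# Storage drops from N**3 for a dense grid to 3 * R * N compact vectors.
import numpy as np

rng = np.random.default_rng(0)
R, N = 16, 128                      # rank and per-axis resolution (illustrative)
vx = rng.standard_normal((R, N))    # factors along x
vy = rng.standard_normal((R, N))    # factors along y
vz = rng.standard_normal((R, N))    # factors along z

def density_at(ix, iy, iz):
    # Evaluate the factorized volume at integer grid indices.
    return float(np.sum(vx[:, ix] * vy[:, iy] * vz[:, iz]))

print(density_at(10, 20, 30), 3 * R * N, N ** 3)
```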

Convolutional Neural Opacity Radiance Fields

1 code implementation • 5 Apr 2021 • Haimin Luo, Anpei Chen, Qixuan Zhang, Bai Pang, Minye Wu, Lan Xu, Jingyi Yu

In this paper, we propose a novel scheme that generates opacity radiance fields with a convolutional neural renderer for fuzzy objects. It is the first to combine explicit opacity supervision and a convolutional mechanism within the neural radiance field framework, enabling high-quality appearance and globally consistent alpha mattes in arbitrary novel views.
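
For intuition on how a radiance field yields per-ray alpha mattes, here is a sketch of the generic NeRF-style volume compositing step that accumulates both color and opacity along a ray; it is not the paper's convolutional neural renderer.

```python
# Generic volume rendering along one ray: per-sample opacities are composited
# front to back, giving both a color and an accumulated alpha (matte) value.
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """sigmas: (S,) densities; colors: (S, 3) RGB; deltas: (S,) step sizes."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                    # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    weights = trans * alphas                                   # compositing weights
    rgb = (weights[:, None] * colors).sum(axis=0)              # rendered color
    alpha_matte = weights.sum()                                # accumulated opacity
    return rgb, alpha_matte

rgb, alpha = composite_ray(np.full(64, 0.5), np.full((64, 3), 0.8), np.full(64, 0.02))
print(rgb, alpha)
```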

MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo

2 code implementations • ICCV 2021 • Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, Hao Su

We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.

Neural Rendering

GNeRF: GAN-based Neural Radiance Field without Posed Camera

1 code implementation • ICCV 2021 • Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming He, Jingyi Yu

We introduce GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field (NeRF) reconstruction for complex scenarios with unknown and even randomly initialized camera poses.

Novel View Synthesis

SofGAN: A Portrait Image Generator with Dynamic Styling

1 code implementation • 7 Jul 2020 • Anpei Chen, Ruiyang Liu, Ling Xie, Zhang Chen, Hao Su, Jingyi Yu

To address this issue, we propose SofGAN, an image generator that decouples the latent space of portraits into two subspaces: a geometry space and a texture space.

2D Semantic Segmentation • Image Generation +1

A Neural Rendering Framework for Free-Viewpoint Relighting

2 code implementations • CVPR 2020 • Zhang Chen, Anpei Chen, Guli Zhang, Chengyuan Wang, Yu Ji, Kiriakos N. Kutulakos, Jingyi Yu

We present a novel Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting using multi-view image inputs.

Neural Rendering • Novel View Synthesis

Learning Semantics-aware Distance Map with Semantics Layering Network for Amodal Instance Segmentation

1 code implementation • 30 May 2019 • Ziheng Zhang, Anpei Chen, Ling Xie, Jingyi Yu, Shenghua Gao

Specifically, we first introduce a new representation, namely a semantics-aware distance map (sem-dist map), to serve as our target for amodal segmentation instead of the commonly used masks and heatmaps (a simplified sketch follows this entry).

Amodal Instance Segmentation • Segmentation +1
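
As a rough, simplified illustration of regressing a distance-map target instead of a binary mask, the sketch below computes a plain signed distance transform from an instance mask with SciPy; it approximates the idea only and is not the paper's exact sem-dist map definition.

```python
# Signed distance map from a binary instance mask: positive inside the object,
# negative outside, zero at the boundary. A stand-in target for illustration.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """mask: (H, W) boolean instance mask -> signed distance to its boundary."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)        # distance to nearest background
    outside = distance_transform_edt(~mask)      # distance to nearest foreground
    return inside - outside

toy = np.zeros((64, 64), dtype=bool)
toy[16:48, 16:48] = True
sdm = signed_distance_map(toy)
print(sdm[32, 32], sdm[0, 0])                    # positive inside, negative outside
```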

Photo-Realistic Facial Details Synthesis from Single Image

1 code implementation • ICCV 2019 • Anpei Chen, Zhang Chen, Guli Zhang, Ziheng Zhang, Kenny Mitchell, Jingyi Yu

Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis.

Face Generation

Deep Surface Light Fields

no code implementations • 15 Oct 2018 • Anpei Chen, Minye Wu, Yingliang Zhang, Nianyi Li, Jie Lu, Shenghua Gao, Jingyi Yu

A surface light field represents the radiance of rays originating from any point on the surface in any direction (see the sketch after this entry).

Data Compression • Image Registration
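
A minimal sketch of the surface light field interface defined above, written as a tiny PyTorch MLP mapping a surface point and an outgoing view direction to RGB radiance; the architecture and sizes are illustrative assumptions, not the paper's network.

```python
# Surface light field as a function f(surface point, view direction) -> RGB,
# modeled here by a small MLP purely for illustration.
import torch
import torch.nn as nn

class SurfaceLightField(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),   # RGB in [0, 1]
        )

    def forward(self, points, view_dirs):
        # points: (N, 3) surface positions; view_dirs: (N, 3) unit directions
        return self.net(torch.cat([points, view_dirs], dim=-1))

slf = SurfaceLightField()
dirs = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
print(slf(torch.rand(4, 3), dirs).shape)  # torch.Size([4, 3])
```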

Sparse Photometric 3D Face Reconstruction Guided by Morphable Models

no code implementations • CVPR 2018 • Xuan Cao, Zhang Chen, Anpei Chen, Xin Chen, Cen Wang, Jingyi Yu

We present a novel 3D face reconstruction technique that leverages sparse photometric stereo (PS) and the latest advances in face registration/modeling from a single image (a background sketch of classic PS follows this entry).

3D Face Reconstruction • Position +1
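
For background on the classic Lambertian photometric stereo that the paper leverages, the sketch below solves per-pixel albedo-scaled normals by least squares from images captured under known directional lights; it is not the paper's sparse, morphable-model-guided pipeline.

```python
# Classic Lambertian photometric stereo: for each pixel, intensities under K
# known lights satisfy I = L @ (albedo * n); solve by least squares.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (K, H, W) grayscale; light_dirs: (K, 3) unit light directions."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                            # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W) albedo-scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```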
