no code implementations • 4 Sep 2024 • Stefano Esposito, Anpei Chen, Christian Reiser, Samuel Rota Bulò, Lorenzo Porzi, Katja Schwarz, Christian Richardt, Michael Zollhöfer, Peter Kontschieder, Andreas Geiger
High-quality real-time view synthesis methods are based on volume rendering, splatting, or surface rendering.
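All three families share the same compositing core: integrating color along a ray, weighted by opacity. A minimal sketch of the discrete volume rendering quadrature (illustrative only, not any specific paper's renderer):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Discrete volume rendering: alpha-composite per-sample densities
    and colors along one ray (quadrature of the volume rendering integral)."""
    alphas = 1.0 - np.exp(-sigmas * deltas)        # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)       # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])    # transmittance *before* each sample
    weights = trans * alphas                       # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0) # final RGB

# toy example: 4 samples along a ray
sigmas = np.array([0.1, 0.8, 2.0, 0.3])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=float)
deltas = np.full(4, 0.25)                          # spacing between samples
print(composite_ray(sigmas, colors, deltas))
```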
no code implementations • 5 Jul 2024 • Anpei Chen, Haofei Xu, Stefano Esposito, Siyu Tang, Andreas Geiger
Radiance field methods have achieved photorealistic novel view synthesis and geometry reconstruction.
1 code implementation • 26 Mar 2024 • Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, Shenghua Gao
3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high-quality novel view synthesis and fast rendering speed without baking.
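For context, the core of a 3DGS-style rasterizer is front-to-back alpha blending of projected Gaussians at each pixel. A toy single-pixel sketch (sorting and blending only; projection and tiling omitted):

```python
import numpy as np

def splat_pixel(means, inv_covs, opacities, colors, depths, px):
    """Front-to-back alpha blending of 2D Gaussians at one pixel,
    the core compositing step shared by 3DGS-style renderers."""
    order = np.argsort(depths)                 # sort splats front to back
    color, trans = np.zeros(3), 1.0
    for i in order:
        d = px - means[i]
        g = np.exp(-0.5 * d @ inv_covs[i] @ d) # Gaussian falloff at the pixel
        alpha = min(0.99, opacities[i] * g)
        color += trans * alpha * colors[i]
        trans *= 1.0 - alpha
        if trans < 1e-4:                       # early termination
            break
    return color

means = np.array([[4.0, 4.0], [5.0, 5.0]])
inv_covs = np.stack([np.eye(2) * 0.5] * 2)
print(splat_pixel(means, inv_covs, [0.8, 0.6],
                  np.array([[1, 0, 0], [0, 0, 1.0]]),
                  np.array([1.0, 2.0]), np.array([4.5, 4.5])))
```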
no code implementations • CVPR 2024 • Zinuo You, Andreas Geiger, Anpei Chen
In contrast to previous fast reconstruction methods that represent the 3D scene globally, we model the light field of a scene as a set of local light field feature probes, parameterized with position and multi-channel 2D feature maps.
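A hedged sketch of how querying such a probe might look; the equirectangular directional lookup, inverse-distance blending, and every name here are assumptions for illustration, not the paper's parameterization:

```python
import numpy as np

def query_probes(ray_o, ray_d, probe_pos, probe_feats, k=4):
    """Gather features from the k nearest local probes; each probe stores a
    2D feature map indexed by ray direction (equirectangular here, assumed)."""
    dists = np.linalg.norm(probe_pos - ray_o, axis=1)
    idx = np.argsort(dists)[:k]
    # direction -> (u, v) in [0, 1): longitude/latitude lookup
    u = (np.arctan2(ray_d[1], ray_d[0]) / (2 * np.pi)) % 1.0
    v = np.arccos(np.clip(ray_d[2], -1, 1)) / np.pi
    H, W = probe_feats.shape[1:3]
    r, c = int(v * (H - 1)), int(u * (W - 1))   # nearest-texel lookup
    feats = probe_feats[idx, r, c]              # (k, C) gathered features
    w = 1.0 / (dists[idx] + 1e-6)               # inverse-distance weights
    return (w[:, None] * feats).sum(0) / w.sum()  # blended feature -> decoder

probes = np.random.rand(10, 3)
feats = np.random.rand(10, 16, 32, 8)
print(query_probes(np.zeros(3), np.array([0.0, 0.0, 1.0]), probes, feats).shape)
```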
1 code implementation • CVPR 2024 • Haofei Xu, Anpei Chen, Yuedong Chen, Christos Sakaridis, Yulun Zhang, Marc Pollefeys, Andreas Geiger, Fisher Yu
We present Multi-Baseline Radiance Fields (MuRF), a general feed-forward approach to solving sparse view synthesis under multiple different baseline settings (small and large baselines, and different numbers of input views).
no code implementations • CVPR 2024 • Gege Gao, Weiyang Liu, Anpei Chen, Andreas Geiger, Bernhard Schölkopf
As pretrained text-to-image diffusion models become increasingly powerful, recent efforts have sought to distill knowledge from these pretrained models to optimize a text-guided 3D model.
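The standard mechanism behind such distillation is score distillation sampling (SDS, DreamFusion-style): diffuse a differentiable render, ask the frozen denoiser for its noise estimate, and backpropagate the residual through the renderer. A minimal sketch with a stand-in `denoiser` interface, illustrating the general technique rather than this paper's specific method:

```python
import torch

def sds_step(render_fn, params, denoiser, text_emb, alphas_bar):
    """One score-distillation-style update: push a differentiable 3D render
    toward images the frozen text-to-image denoiser finds likely.
    `denoiser(x_t, t, text_emb) -> predicted noise` is an assumed interface."""
    x = render_fn(params)                         # rendered image, requires grad
    t = torch.randint(20, 980, (1,))              # random diffusion timestep
    ab = alphas_bar[t]
    noise = torch.randn_like(x)
    x_t = ab.sqrt() * x + (1 - ab).sqrt() * noise # forward-diffuse the render
    with torch.no_grad():
        eps_hat = denoiser(x_t, t, text_emb)      # frozen model's noise estimate
    grad = (1 - ab) * (eps_hat - noise)           # SDS gradient, w(t) = 1 - alpha_bar
    x.backward(gradient=grad)                     # inject grad through the renderer

# dummy usage: identity renderer and random denoiser, just to show the flow
params = torch.randn(3, 8, 8, requires_grad=True)
sds_step(lambda p: p, params, lambda x_t, t, e: torch.randn_like(x_t),
         text_emb=None, alphas_bar=torch.linspace(0.999, 0.01, 1000))
print(params.grad.shape)
```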
1 code implementation • CVPR 2024 • Zehao Yu, Anpei Chen, Binbin Huang, Torsten Sattler, Andreas Geiger
Recently, 3D Gaussian Splatting has demonstrated impressive novel view synthesis results, reaching high fidelity and efficiency.
no code implementations • 26 May 2023 • Xinyue Wei, Fanbo Xiang, Sai Bi, Anpei Chen, Kalyan Sunkavalli, Zexiang Xu, Hao Su
We present a method for generating high-quality watertight manifold meshes from multi-view input images.
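One common final step in such pipelines is extracting a watertight triangle mesh from a signed distance field via marching cubes; a toy sketch on an analytic sphere SDF (illustrative, not this paper's pipeline):

```python
import numpy as np
from skimage import measure

# Sphere SDF sampled on a grid; marching cubes on its zero level set
# yields a watertight triangle mesh.
n = 64
xs = np.linspace(-1, 1, n)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
sdf = np.sqrt(X**2 + Y**2 + Z**2) - 0.5          # signed distance to a sphere

verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)                   # extracted triangle mesh
```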
1 code implementation • ICCV 2023 • Hansheng Chen, Jiatao Gu, Anpei Chen, Wei Tian, Zhuowen Tu, Lingjie Liu, Hao Su
3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images.
1 code implementation • 2 Feb 2023 • Anpei Chen, Zexiang Xu, Xinyue Wei, Siyu Tang, Hao Su, Andreas Geiger
Our experiments show that DiF leads to improvements in approximation quality, compactness, and training time when compared to previous fast reconstruction methods.
no code implementations • 28 Oct 2022 • Liangchen Song, Anpei Chen, Zhong Li, Zhang Chen, Lele Chen, Junsong Yuan, Yi Xu, Andreas Geiger
Freely exploring a real-world 4D spatiotemporal space in VR has been a long-standing quest.
1 code implementation • 26 May 2022 • Binbin Huang, Xinhao Yan, Anpei Chen, Shenghua Gao, Jingyi Yu
We present an efficient frequency-based neural representation termed PREF: a shallow MLP augmented with a phasor volume that covers a significantly broader spectrum than previous Fourier feature mapping or positional encoding.
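For reference, the positional encoding baseline PREF is contrasted with maps each coordinate to sines and cosines at octave frequencies:

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """NeRF-style positional encoding: map each coordinate to sines and
    cosines at octave frequencies -- the baseline PREF is compared against."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi   # 2^k * pi
    angles = x[..., None] * freqs                 # (..., D, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)         # (..., D * 2F)

print(positional_encoding(np.array([[0.3, -0.7, 0.1]])).shape)  # (1, 36)
```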
2 code implementations • 17 Mar 2022 • Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, Hao Su
We demonstrate that applying traditional CP decomposition -- which factorizes tensors into rank-one components with compact vectors -- in our framework leads to improvements over vanilla NeRF.
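The CP decomposition at the heart of this comparison represents a dense 3D tensor as a sum of rank-one outer products of compact vectors; a toy reconstruction:

```python
import numpy as np

def cp_reconstruct(u, v, w):
    """Rebuild a 3D tensor from rank-one components: T = sum_r u_r (x) v_r (x) w_r.
    TensoRF stores such compact vectors instead of a dense feature grid."""
    return np.einsum("ri,rj,rk->ijk", u, v, w)

R, I, J, K = 4, 8, 8, 8
u, v, w = (np.random.randn(R, n) for n in (I, J, K))
T = cp_reconstruct(u, v, w)
print(T.shape)  # (8, 8, 8): 512 values from only 4 * (8 + 8 + 8) stored ones
```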
1 code implementation • 5 Apr 2021 • Haimin Luo, Anpei Chen, Qixuan Zhang, Bai Pang, Minye Wu, Lan Xu, Jingyi Yu
In this paper, we propose a novel scheme that generates opacity radiance fields with a convolutional neural renderer for fuzzy objects. It is the first to combine explicit opacity supervision with a convolutional mechanism in the neural radiance field framework, enabling high-quality appearance and globally consistent alpha matte generation in arbitrary novel views.
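The principle behind extracting an alpha matte from a radiance field is that a ray's opacity is one minus its surviving transmittance; a minimal sketch of that accumulation (the principle only, not the paper's convolutional renderer):

```python
import numpy as np

def ray_alpha(sigmas, deltas):
    """Per-ray alpha matte value from volume rendering: the total opacity
    1 - T_N accumulated along the ray."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    transmittance = np.prod(1.0 - alphas)   # light surviving the whole ray
    return 1.0 - transmittance              # in [0, 1]: soft foreground coverage

print(ray_alpha(np.array([0.05, 0.4, 1.5]), np.full(3, 0.2)))
```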
2 code implementations • ICCV 2021 • Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, Hao Su
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
1 code implementation • ICCV 2021 • Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming He, Jingyi Yu
We introduce GNeRF, a framework that marries Generative Adversarial Networks (GANs) with Neural Radiance Field (NeRF) reconstruction for complex scenarios with unknown, even randomly initialized, camera poses.
1 code implementation • 7 Jul 2020 • Anpei Chen, Ruiyang Liu, Ling Xie, Zhang Chen, Hao Su, Jingyi Yu
To address this issue, we propose a SofGAN image generator to decouple the latent space of portraits into two subspaces: a geometry space and a texture space.
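A heavily simplified sketch of the decoupling idea: one latent drives a geometry layout, another drives the texture used to render it, so swapping the texture code restyles the same shape. Sizes and layers here are toy assumptions, not SofGAN's architecture:

```python
import torch
import torch.nn as nn

class TwoSpaceGenerator(nn.Module):
    """Toy decoupled generator: a geometry latent produces a coarse layout,
    a texture latent produces the color applied to it."""
    def __init__(self, dim=64):
        super().__init__()
        self.geom = nn.Sequential(nn.Linear(dim, 16 * 16), nn.Sigmoid())
        self.tex = nn.Linear(dim, 3)

    def forward(self, z_geom, z_tex):
        layout = self.geom(z_geom).view(-1, 1, 16, 16)  # geometry layout
        color = self.tex(z_tex).view(-1, 3, 1, 1)       # texture/appearance
        return layout * color                           # same layout, restylable

g = TwoSpaceGenerator()
z_g, z_t1, z_t2 = (torch.randn(1, 64) for _ in range(3))
img1, img2 = g(z_g, z_t1), g(z_g, z_t2)  # same geometry, two textures
```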
2 code implementations • CVPR 2020 • Zhang Chen, Anpei Chen, Guli Zhang, Chengyuan Wang, Yu Ji, Kiriakos N. Kutulakos, Jingyi Yu
We present a novel Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting using multi-view image inputs.
1 code implementation • 30 May 2019 • Ziheng Zhang, Anpei Chen, Ling Xie, Jingyi Yu, Shenghua Gao
Specifically, we first introduce a new representation, namely a semantics-aware distance map (sem-dist map), to serve as our target for amodal segmentation instead of the commonly used masks and heatmaps.
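One plausible reading of such a distance-map target is a signed distance to the object boundary, stacked per class for the semantics-aware variant; the paper's exact encoding may differ:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed distance to the object boundary: positive inside the object,
    negative outside -- a richer regression target than a binary mask."""
    inside = distance_transform_edt(mask)        # distance to background
    outside = distance_transform_edt(1 - mask)   # distance to foreground
    return inside - outside

mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1
print(signed_distance_map(mask))
```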
1 code implementation • ICCV 2019 • Anpei Chen, Zhang Chen, Guli Zhang, Ziheng Zhang, Kenny Mitchell, Jingyi Yu
Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis.
no code implementations • 15 Oct 2018 • Anpei Chen, Minye Wu, Yingliang Zhang, Nianyi Li, Jie Lu, Shenghua Gao, Jingyi Yu
A surface light field represents the radiance of rays originating from any point on the surface in any direction.
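In other words, a surface light field is a 4D function L(x, d) over surface points and outgoing directions. An analytic Phong-lobe stand-in makes that interface concrete (real SLF methods fit this function from images):

```python
import numpy as np

def surface_light_field(x, d, albedo, light_dir, specular=0.4, shininess=16):
    """Analytic stand-in for a surface light field L(x, d): radiance leaving
    surface point x toward direction d, here on a unit sphere lit by one light."""
    n = x / np.linalg.norm(x)                     # sphere: normal = position
    diffuse = albedo * max(np.dot(n, light_dir), 0.0)
    r = 2 * np.dot(n, light_dir) * n - light_dir  # mirror reflection of the light
    spec = specular * max(np.dot(r, d), 0.0) ** shininess
    return diffuse + spec

x = np.array([0.0, 0.0, 1.0])                     # point on a unit sphere
d = np.array([0.0, 0.0, 1.0])                     # viewing direction
light = np.array([0.0, 0.7071, 0.7071])
print(surface_light_field(x, d, albedo=0.6, light_dir=light))
```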
no code implementations • CVPR 2018 • Xuan Cao, Zhang Chen, Anpei Chen, Xin Chen, Cen Wang, Jingyi Yu
We present a novel 3D face reconstruction technique that leverages sparse photometric stereo (PS) and the latest advances in face registration/modeling from a single image.