Sem2NeRF: Converting Single-View Semantic Masks to Neural Radiance Fields

21 Mar 2022 · Yuedong Chen, Qianyi Wu, Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai

Image translation and manipulation have gained increasing attention along with the rapid development of deep generative models. Although existing approaches have brought impressive results, they mainly operate in 2D space. In light of recent advances in NeRF-based 3D-aware generative models, we introduce a new task, Semantic-to-NeRF translation, which aims to reconstruct a 3D scene modelled by NeRF, conditioned on one single-view semantic mask as input. To kick off this novel task, we propose the Sem2NeRF framework. In particular, Sem2NeRF addresses this highly challenging task by encoding the semantic mask into the latent code that controls the 3D scene representation of a pre-trained decoder. To further improve the accuracy of the mapping, we integrate a new region-aware learning strategy into the design of both the encoder and the decoder. We verify the efficacy of the proposed Sem2NeRF and demonstrate that it outperforms several strong baselines on two benchmark datasets. Code and video are available at https://donydchen.github.io/sem2nerf/
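As a rough illustration of the pipeline the abstract describes, the sketch below (PyTorch-style, with hypothetical names such as `MaskEncoder` and `pretrained_nerf_decoder`; the actual Sem2NeRF architecture, including the region-aware components, is detailed in the paper) shows an encoder mapping a one-hot semantic mask to a latent code that conditions a frozen, pre-trained 3D-aware NeRF decoder:

```python
import torch
import torch.nn as nn

class MaskEncoder(nn.Module):
    """Hypothetical CNN encoder: one-hot semantic mask -> latent code."""
    def __init__(self, num_classes: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(256, latent_dim)

    def forward(self, mask_onehot: torch.Tensor) -> torch.Tensor:
        feat = self.net(mask_onehot).flatten(1)  # (N, 256)
        return self.fc(feat)                     # (N, latent_dim)

def render_from_mask(encoder, pretrained_nerf_decoder, mask_onehot, camera_pose):
    # Freeze the pre-trained decoder: gradients still flow *through* it
    # back to the encoder during training, but its weights never update.
    for p in pretrained_nerf_decoder.parameters():
        p.requires_grad_(False)
    z = encoder(mask_onehot)                      # semantic mask -> latent code
    return pretrained_nerf_decoder(z, camera_pose)  # frozen NeRF renders the view
```

Keeping the decoder frozen while letting gradients pass through it is what allows a single 2D mask to inherit the multi-view 3D consistency already learned by the generative NeRF.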

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| 3D-Aware Image Synthesis | CelebAMask-HQ | Sem2NeRF | FID | 41.52 | #1 |
| 3D-Aware Image Synthesis | CelebAMask-HQ | Sem2NeRF | IS | 2.03 | #1 |
| 3D-Aware Image Synthesis | CelebAMask-HQ | pSp | FID | 55.56 | #2 |
| 3D-Aware Image Synthesis | CelebAMask-HQ | pSp | IS | 1.74 | #2 |
| 3D-Aware Image Synthesis | CelebAMask-HQ | pix2pixHD | FID | 67.32 | #3 |
| 3D-Aware Image Synthesis | CelebAMask-HQ | pix2pixHD | IS | 1.72 | #3 |
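For context, FID (Fréchet Inception Distance, lower is better) and IS (Inception Score, higher is better) compare generated images against real ones via Inception-network features. A minimal sketch of how such scores can be computed with the `torchmetrics` library follows; the random tensors are placeholders standing in for the actual rendered and dataset images:

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore

# Placeholder batches of uint8 RGB images in (N, 3, H, W) layout;
# a real evaluation would load renders and CelebAMask-HQ images instead.
real_images = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)
fid.update(real_images, real=True)
fid.update(fake_images, real=False)
print(f"FID: {fid.compute():.2f}")        # lower is better

inception = InceptionScore()
inception.update(fake_images)
mean, std = inception.compute()
print(f"IS: {mean:.2f} +/- {std:.2f}")    # higher is better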
