Neural Implicit 3D Shapes from Single Images with Spatial Patterns

6 Jun 2021 · Yixin Zhuang, Yunzhe Liu, Yujie Wang, Baoquan Chen

Neural implicit functions have achieved impressive results for reconstructing 3D shapes from single images. However, the image features used to describe the 3D point samples of an implicit function become less effective when the input images exhibit significant variations in occlusion, viewpoint, and appearance. To encode image features more effectively, we study a geometry-aware convolutional kernel that leverages the geometric relationships among point samples through a proposed \emph{spatial pattern}, i.e., a structured point set. Specifically, the kernel operates on the 2D projections of the 3D points in the spatial pattern. Supported by the spatial pattern, the 2D kernel encodes geometric information that is crucial for 3D reconstruction, whereas traditional kernels mainly capture appearance information. Furthermore, to enable the network to discover more adaptive spatial patterns that capture non-local contextual information, the kernel is made deformable, driven by a spatial pattern generator. Experimental results on both synthetic and real datasets demonstrate the superiority of the proposed method. Pre-trained models, code, and data are available at https://github.com/yixin26/SVR-SP.
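The described mechanism lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch rendition of the idea, not the authors' implementation: a small generator predicts a deformable 3D point pattern around each query point, the pattern is projected to 2D, image features are bilinearly sampled at those projections, and a learned kernel aggregates them. All names (`SpatialPatternKernel`, `pattern_gen`, `project`) and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialPatternKernel(nn.Module):
    """Sketch of a geometry-aware kernel: sample image features at the
    2D projections of a learned 3D point pattern around each query point.
    Hypothetical reconstruction, not the paper's released code."""

    def __init__(self, feat_dim=64, n_pattern=8):
        super().__init__()
        self.n_pattern = n_pattern
        # Spatial pattern generator: predicts K 3D offsets per query point,
        # making the pattern (and hence the kernel support) deformable.
        self.pattern_gen = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, n_pattern * 3),
        )
        # Learned aggregation over the K sampled features, playing the
        # role of the kernel weights in a deformable 2D convolution.
        self.kernel = nn.Linear(n_pattern * feat_dim, feat_dim)

    def forward(self, points, feat_map, project):
        """points: (B, N, 3) query points; feat_map: (B, C, H, W) image
        features; project: callable mapping (B, M, 3) -> (B, M, 2) image
        coordinates normalized to [-1, 1]."""
        B, N, _ = points.shape
        # Deformable spatial pattern: query point plus predicted offsets.
        offsets = self.pattern_gen(points).view(B, N, self.n_pattern, 3)
        pattern = points.unsqueeze(2) + offsets            # (B, N, K, 3)
        # Project the 3D pattern points into the image plane.
        uv = project(pattern.view(B, -1, 3))               # (B, N*K, 2)
        # Bilinearly sample image features at the 2D projections.
        feat = F.grid_sample(feat_map, uv.unsqueeze(1),
                             align_corners=True)           # (B, C, 1, N*K)
        feat = feat.squeeze(2).permute(0, 2, 1)            # (B, N*K, C)
        feat = feat.reshape(B, N, -1)                      # (B, N, K*C)
        # Aggregate the pattern's features with learned kernel weights.
        return self.kernel(feat)                           # (B, N, C)

if __name__ == "__main__":
    enc = SpatialPatternKernel(feat_dim=64, n_pattern=8)
    feat_map = torch.randn(2, 64, 32, 32)
    points = torch.rand(2, 100, 3) * 2 - 1
    ortho = lambda p: p[..., :2]      # toy orthographic projection
    print(enc(points, feat_map, ortho).shape)  # torch.Size([2, 100, 64])
```

The toy orthographic `project` stands in for a real camera model; in practice the projection would use the known or estimated camera intrinsics and pose.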
