Dynamic Plane Convolutional Occupancy Networks

11 Nov 2020  ·  Stefan Lionar, Daniil Emtsev, Dusan Svilarkovic, Songyou Peng

Learning-based 3D reconstruction using implicit neural representations has shown promising progress not only at the object level but also in more complex scenes. In this paper, we propose Dynamic Plane Convolutional Occupancy Networks, a novel implicit representation that pushes the quality of 3D surface reconstruction further. The input noisy point clouds are encoded into per-point features that are projected onto multiple 2D dynamic planes. A fully-connected network learns to predict the plane parameters that best describe the shapes of objects or scenes. To further exploit translational equivariance, convolutional neural networks are applied to process the plane features. Our method shows superior performance in surface reconstruction from unoriented point clouds on ShapeNet as well as on an indoor scene dataset. Moreover, we also provide interesting observations on the distribution of the learned dynamic planes.
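The core geometric operation the abstract describes is projecting 3D points onto a learned ("dynamic") plane before applying a 2D convolutional network. As a rough illustration of that projection step only, here is a minimal NumPy sketch; the function names are hypothetical, and the plane normal here is a fixed stand-in for what the paper's fully-connected network would predict.

```python
import numpy as np

def normalize(v, eps=1e-8):
    """Normalize vectors along the last axis, guarding against zero norm."""
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + eps)

def project_to_plane(points, normal):
    """Project 3D points onto the plane through the origin with the given
    normal, returning 2D coordinates in an orthonormal basis of that plane.
    In the paper this projection feeds a 2D feature grid processed by a CNN."""
    normal = normalize(normal)
    # Pick a helper axis that is not nearly parallel to the normal,
    # then build an orthonormal in-plane basis (u, v) via cross products.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(normal @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = normalize(np.cross(normal, helper))
    v = np.cross(normal, u)
    return np.stack([points @ u, points @ v], axis=-1)

# Toy example: a fixed z-axis normal stands in for a learned plane normal.
points = np.random.default_rng(0).normal(size=(100, 3))
normal = np.array([0.0, 0.0, 1.0])
coords2d = project_to_plane(points, normal)
print(coords2d.shape)  # (100, 2)
```

In the actual method, several such planes are predicted jointly, and point features are scattered into a 2D grid per plane before convolution; this sketch covers only the coordinate projection.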


Datasets

ShapeNet
Results from the Paper


Task               Dataset   Model        Metric            Value  Global Rank
3D Reconstruction  ShapeNet  DP-ConvONet  IoU               89.5   #4
3D Reconstruction  ShapeNet  DP-ConvONet  Chamfer Distance  0.42   #4
3D Reconstruction  ShapeNet  ConvONet     IoU               88.4   #5
3D Reconstruction  ShapeNet  ConvONet     Chamfer Distance  0.45   #5
3D Reconstruction  ShapeNet  ONet         IoU               76.1   #6
3D Reconstruction  ShapeNet  ONet         Chamfer Distance  0.87   #6

Methods


No methods listed for this paper.