PlaneNet: Piece-wise Planar Reconstruction from a Single RGB Image

This paper proposes a deep neural network (DNN) for piece-wise planar depthmap reconstruction from a single RGB image. While DNNs have brought remarkable progress to single-image depth prediction, piece-wise planar depthmap reconstruction requires a structured geometry representation, and has been a difficult task to master even for DNNs. The proposed end-to-end DNN learns to directly infer a set of plane parameters and corresponding plane segmentation masks from a single RGB image. We have generated more than 50,000 piece-wise planar depthmaps for training and testing from ScanNet, a large-scale RGBD video database. Our qualitative and quantitative evaluations demonstrate that the proposed approach outperforms baseline methods in terms of both plane segmentation and depth estimation accuracy. To the best of our knowledge, this paper presents the first end-to-end neural architecture for piece-wise planar reconstruction from a single RGB image. Code and data are available at https://github.com/art-programmer/PlaneNet.
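
As a rough illustration of the architecture described in the abstract (not the authors' implementation), the sketch below wires a generic torchvision ResNet-50 encoder to two heads: one regressing a fixed number of 3-vector plane parameters, the other predicting per-pixel segmentation logits for the planes plus a non-planar class. The backbone choice, the plane count `num_planes`, and all layer sizes are assumptions made for illustration only.

```python
# Minimal sketch of a single-image piece-wise planar prediction head.
# NOT the PlaneNet implementation; backbone, num_planes, and layer sizes are assumptions.
import torch
import torch.nn as nn
import torchvision


class PiecewisePlanarNet(nn.Module):
    def __init__(self, num_planes=10):
        super().__init__()
        self.num_planes = num_planes
        resnet = torchvision.models.resnet50(weights=None)
        # Keep convolutional trunk only: B x 2048 x H/32 x W/32 features.
        self.encoder = nn.Sequential(*list(resnet.children())[:-2])

        # Global branch: regress 3 parameters per plane from pooled features.
        self.plane_params = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(2048, 512), nn.ReLU(inplace=True),
            nn.Linear(512, num_planes * 3),
        )
        # Dense branch: per-pixel logits for num_planes plane masks + 1 non-planar class.
        self.segmentation = nn.Sequential(
            nn.Conv2d(2048, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, num_planes + 1, 1),
        )

    def forward(self, image):
        feat = self.encoder(image)
        planes = self.plane_params(feat).view(-1, self.num_planes, 3)
        masks = nn.functional.interpolate(
            self.segmentation(feat), size=image.shape[-2:],
            mode="bilinear", align_corners=False)
        return planes, masks  # plane parameters and per-pixel segmentation logits


if __name__ == "__main__":
    model = PiecewisePlanarNet()
    planes, masks = model(torch.randn(1, 3, 192, 256))
    print(planes.shape, masks.shape)  # (1, 10, 3) and (1, 11, 192, 256)
```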

CVPR 2018

Datasets

ScanNet · NYU Depth v2

Results

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Plane Instance Segmentation | NYU Depth v2 | PlaneNet | RI | 0.723 | #2 |
| Plane Instance Segmentation | NYU Depth v2 | PlaneNet | SC | 0.404 | #2 |
| Plane Instance Segmentation | NYU Depth v2 | PlaneNet | VI | 1.932 | #1 |
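
In this benchmark, RI, SC, and VI conventionally denote the Rand index, segmentation covering, and variation of information (lower is better for VI). The sketch below, using only NumPy, computes RI and VI from two integer plane-label maps; the function names and aggregation are illustrative assumptions, not the benchmark's official evaluation code.

```python
# Minimal sketch (not the official evaluation): Rand index (RI) and variation of
# information (VI) between predicted and ground-truth plane segmentations,
# each given as an integer label map of the same shape.
import numpy as np


def comb2(x):
    """Number of unordered pairs C(x, 2), element-wise."""
    return x * (x - 1) / 2.0


def segmentation_metrics(pred, gt):
    pred, gt = pred.ravel(), gt.ravel()
    n = pred.size

    # Contingency table: pixel counts for each (predicted label, ground-truth label) pair.
    contingency = np.zeros((pred.max() + 1, gt.max() + 1))
    np.add.at(contingency, (pred, gt), 1)
    a = contingency.sum(axis=1)  # pixels per predicted segment
    b = contingency.sum(axis=0)  # pixels per ground-truth segment

    # Rand index: fraction of pixel pairs on which the two segmentations agree.
    same_both = comb2(contingency).sum()
    ri = (comb2(n) + 2 * same_both - comb2(a).sum() - comb2(b).sum()) / comb2(n)

    # Variation of information: VI = H(pred) + H(gt) - 2 * I(pred; gt).
    p_joint = contingency / n
    p_pred, p_gt = a / n, b / n
    h_pred = -np.sum(p_pred[p_pred > 0] * np.log(p_pred[p_pred > 0]))
    h_gt = -np.sum(p_gt[p_gt > 0] * np.log(p_gt[p_gt > 0]))
    nz = p_joint > 0
    mi = np.sum(p_joint[nz] * np.log(p_joint[nz] / (p_pred[:, None] * p_gt[None, :])[nz]))
    vi = h_pred + h_gt - 2 * mi
    return ri, vi
```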
