Inferring Point Clouds from Single Monocular Images by Depth Intermediation

4 Dec 2018 · Wei Zeng, Sezer Karaoglu, Theo Gevers

In this paper, we propose a pipeline to generate the 3D point cloud of an object from a single-view RGB image. Most previous work predicts 3D point coordinates directly from a single RGB image. We instead decompose the problem into depth estimation from a single image and point cloud completion from a partial point cloud. Our method sequentially predicts a depth map from the image and then infers the complete 3D object point cloud based on the resulting partial point cloud. We explicitly impose the geometric constraint of the camera model in our pipeline and enforce alignment between the generated point cloud and the estimated depth map. Experimental results on the single-image 3D object reconstruction task show that the proposed method outperforms existing state-of-the-art methods. Both qualitative and quantitative results demonstrate the generality and suitability of our method.
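The "depth intermediation" step converts the predicted depth map into a partial point cloud via the pinhole camera model before completion. The sketch below illustrates this back-projection only; it is not the authors' implementation, and the function name, the intrinsics (fx, fy, cx, cy), and the synthetic depth map are illustrative assumptions.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metric depth) into a partial
    point cloud using the pinhole camera model.

    Pixels with non-positive depth are dropped. The intrinsics
    (fx, fy, cx, cy) are assumed known; they are not given in the abstract.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid coordinates
    z = depth
    x = (u - cx) * z / fx                            # X = (u - cx) * Z / fx
    y = (v - cy) * z / fy                            # Y = (v - cy) * Z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # keep valid-depth pixels

# Usage: a synthetic 4x4 depth map with illustrative intrinsics.
depth = np.full((4, 4), 2.0)
partial_cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=2.0, cy=2.0)
print(partial_cloud.shape)  # (16, 3)
```

Reprojecting the completed point cloud with the same intrinsics is one natural way to enforce the alignment between the generated point cloud and the estimated depth map mentioned in the abstract.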
