A Point Set Generation Network for 3D Object Reconstruction from a Single Image

CVPR 2017 · Haoqiang Fan, Hao Su, Leonidas Guibas

Generation of 3D data by deep neural networks has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collections of images; however, these representations obscure the natural invariance of 3D shapes under geometric transformations and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straightforward form of output: point cloud coordinates. Along with this problem arises a unique and interesting issue: the ground-truth shape for an input image may be ambiguous. Driven by this unorthodox output form and the inherent ambiguity in the ground truth, we design an architecture, loss function, and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments, our system not only outperforms state-of-the-art methods on single-image 3D reconstruction benchmarks, but also shows strong performance on 3D shape completion and a promising ability to make multiple plausible predictions.
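A loss over point clouds must be invariant to the ordering of points, which is the core difficulty the abstract alludes to. PSGN addresses this with set-to-set distances, notably the Chamfer distance (the paper also discusses the Earth Mover's distance). Below is a minimal NumPy sketch of the squared Chamfer distance for illustration only; the function name, shapes, and example data are our own, not the paper's code:

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric squared Chamfer distance between point sets of shape (N, 3) and (M, 3).

    For each point in one set, take the squared distance to its nearest
    neighbor in the other set; average both directions and sum them.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Hypothetical usage with random stand-in point clouds.
rng = np.random.default_rng(0)
pred = rng.random((1024, 3))  # predicted point cloud
gt = rng.random((1024, 3))    # ground-truth point cloud
print(chamfer_distance(pred, gt))
```

Because both nearest-neighbor terms are differentiable almost everywhere in the point coordinates, this loss can be minimized directly by gradient descent on the network's output coordinates.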


Datasets

Data3D−R2N2 (see Results below)
Results from the Paper


Ranked #2 on 3D Reconstruction on Data3D−R2N2 (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| 3D Reconstruction | Data3D−R2N2 | PSGN | 3DIoU | 0.640 | #2 | Yes |
| 3D Object Reconstruction | Data3D−R2N2 | PSGN | 3DIoU | 0.64 | #6 | |
| 3D Object Reconstruction | Data3D−R2N2 | PSGN | Avg F1 | 48.58 | #4 | |
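The 3DIoU metric in the table compares occupancy volumes: the predicted point cloud is voxelized into a boolean grid and intersected with the ground-truth grid. A minimal sketch of both steps, assuming points normalized to the unit cube and a hypothetical 32³ resolution (the benchmark's actual voxelization pipeline may differ):

```python
import numpy as np

def voxelize(points: np.ndarray, res: int = 32) -> np.ndarray:
    """Map points in the unit cube [0, 1]^3 to a boolean occupancy grid."""
    idx = np.clip((points * res).astype(int), 0, res - 1)
    grid = np.zeros((res, res, res), dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def voxel_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two equally shaped boolean occupancy grids."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```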

Methods


No methods listed for this paper.