Image credit: Choy et al.
We demonstrate the model's ability to generate a 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects; and (3) testing on novel object classes.
Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2).
Ranked #4 on 3D Reconstruction on Data3D−R2N2
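The 3D-R2N2 idea above can be illustrated at toy scale: each 2D view is encoded to a feature vector, a recurrent (GRU-style) update fuses the views into a hidden state, and a decoder maps that state to per-voxel occupancy probabilities. This is a minimal NumPy sketch, not the authors' implementation; all dimensions, weight names, and the flattened (rather than convolutional) layers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only (the real model uses conv encoders and a 3D conv GRU).
D_IMG, D_FEAT, D_HID, GRID = 8 * 8, 16, 32, 4
W_enc = rng.normal(scale=0.1, size=(D_IMG, D_FEAT))
W_z = rng.normal(scale=0.1, size=(D_FEAT, D_HID))
W_h = rng.normal(scale=0.1, size=(D_FEAT, D_HID))
U_z = rng.normal(scale=0.1, size=(D_HID, D_HID))
U_h = rng.normal(scale=0.1, size=(D_HID, D_HID))
W_dec = rng.normal(scale=0.1, size=(D_HID, GRID ** 3))

def encode(image):
    # Toy 2D encoder: flatten the image and project it to a feature vector.
    return np.tanh(image.reshape(-1) @ W_enc)

def gru_update(h, x):
    # Simplified GRU-style update fusing a new view into the hidden state.
    z = 1.0 / (1.0 + np.exp(-(h @ U_z + x @ W_z)))  # update gate
    h_tilde = np.tanh(h @ U_h + x @ W_h)            # candidate state
    return (1.0 - z) * h + z * h_tilde

def decode(h):
    # Map the hidden state to per-voxel occupancy probabilities in [0, 1].
    probs = 1.0 / (1.0 + np.exp(-(h @ W_dec)))
    return probs.reshape(GRID, GRID, GRID)

views = [rng.random((8, 8)) for _ in range(3)]  # one or more input views
h = np.zeros(D_HID)
for v in views:                                 # recurrent view fusion
    h = gru_update(h, encode(v))
volume = decode(h)
print(volume.shape)  # (4, 4, 4)
```

The same loop works with a single view, which is what makes the recurrent formulation handle both single- and multi-view reconstruction.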
We propose an end-to-end deep learning architecture that produces a 3D shape as a triangular mesh from a single color image.
Ranked #3 on 3D Object Reconstruction on Data3D−R2N2 (Avg F1 metric)
Rendering bridges the gap between 2D vision and 3D scenes by simulating the physical process of image formation.
Ranked #1 on 3D Object Reconstruction on ShapeNet
Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image.
Ranked #2 on 3D Reconstruction on Data3D−R2N2 (using extra training data)
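A conditional shape sampler can be sketched as a network that maps an image feature plus a fresh noise vector to point coordinates, so repeated calls yield different plausible shapes for the same ambiguous input. The following NumPy toy is an assumption-laden illustration of that sampling interface, not the paper's model; the single linear layer, sizes, and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

D_IMG, D_NOISE, N_POINTS = 8 * 8, 4, 256
W = rng.normal(scale=0.1, size=(D_IMG + D_NOISE, N_POINTS * 3))

def sample_point_cloud(image):
    # Condition on the image and a random code z; different z values
    # give different plausible 3D point clouds for the same image.
    z = rng.normal(size=D_NOISE)
    x = np.concatenate([image.reshape(-1), z])
    return np.tanh(x @ W).reshape(N_POINTS, 3)

image = rng.random((8, 8))
clouds = [sample_point_cloud(image) for _ in range(2)]
print(clouds[0].shape)  # (256, 3)
```

Drawing several samples per image is what lets such a model express uncertainty about occluded or ambiguous geometry.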
Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones.
In this paper, we propose 3D point-capsule networks, an auto-encoder designed to process sparse 3D point clouds while preserving spatial arrangements of the input data.
Ranked #3 on 3D Object Classification on ModelNet40
A multi-scale context-aware fusion module is then introduced to adaptively select high-quality reconstructions for different parts from all coarse 3D volumes to obtain a fused 3D volume.
Ranked #1 on 3D Object Reconstruction on Data3D−R2N2
Then, a context-aware fusion module is introduced to adaptively select high-quality reconstructions for each part (e.g., table legs) from different coarse 3D volumes to obtain a fused 3D volume.
Ranked #2 on 3D Object Reconstruction on Data3D−R2N2
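The context-aware fusion described in the two entries above can be sketched as a per-voxel softmax over views: each coarse volume comes with quality scores, and the fused volume softly selects, for every spatial part, the reconstruction the scores favor. A minimal NumPy sketch under assumed shapes (the real module predicts scores with a learned network):

```python
import numpy as np

rng = np.random.default_rng(2)

def fuse_volumes(coarse_volumes, scores):
    # coarse_volumes: (n_views, D, D, D) occupancy grids, one per view.
    # scores: (n_views, D, D, D) per-voxel quality scores; a softmax
    # across views softly picks the best reconstruction for each part.
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    return (w * coarse_volumes).sum(axis=0)

vols = rng.random((3, 4, 4, 4))       # three coarse per-view reconstructions
scores = rng.normal(size=(3, 4, 4, 4))  # stand-in for learned scores
fused = fuse_volumes(vols, scores)
print(fused.shape)  # (4, 4, 4)
```

Because the weights sum to one per voxel, each fused value stays within the range of the candidate reconstructions at that voxel.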
In this paper, we address the problem of 3D object mesh reconstruction from RGB videos.