Rendering bridges the gap between 2D vision and 3D scenes by simulating the physical process of image formation.
We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder, called IM-NET, for shape generation, aimed at improving the visual quality of the generated shapes.
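For intuition, here is a minimal sketch of an implicit field decoder in the spirit of IM-NET: an MLP that takes a shape latent code and a 3D query point and predicts an inside/outside value. The class name, layer sizes, and `latent_dim` are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    """Toy implicit field decoder: (shape code, 3D point) -> occupancy in [0, 1].

    Layer sizes and names are illustrative, not IM-NET's exact architecture.
    """
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, code, points):
        # code: (B, latent_dim); points: (B, N, 3)
        B, N, _ = points.shape
        code = code.unsqueeze(1).expand(B, N, -1)      # broadcast code to every query point
        x = torch.cat([code, points], dim=-1)          # (B, N, latent_dim + 3)
        return torch.sigmoid(self.net(x)).squeeze(-1)  # (B, N) occupancy values

# Query the field at arbitrary points; a dense grid plus marching cubes yields a mesh.
decoder = ImplicitDecoder()
code = torch.randn(1, 128)
pts = torch.rand(1, 4096, 3) * 2 - 1                   # query points in [-1, 1]^3
occ = decoder(code, pts)                               # per-point inside/outside probability
```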
Reconstructing 3D shapes from single-view images has been a long-standing research problem.
Visual perception entails solving a wide set of tasks (e.g., object detection, depth estimation, etc.).
We propose SDFDiff, a novel approach for image-based shape optimization using differentiable rendering of 3D shapes represented by signed distance functions (SDF).
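As a rough illustration of why SDF rendering is differentiable, the sketch below sphere-traces a ray against an analytic sphere SDF in PyTorch; because every step is a tensor operation, the rendered depth carries gradients back to the shape parameters. The analytic sphere parameterization is an assumption for illustration only; SDFDiff itself operates on sampled SDF grids.

```python
import torch

def sphere_sdf(p, center, radius):
    """Signed distance from point p to the surface of a sphere."""
    return torch.linalg.norm(p - center, dim=-1) - radius

def sphere_trace(origin, direction, center, radius, steps=64):
    """March along the ray by the SDF value until we (approximately) hit the surface."""
    t = torch.zeros(())                # distance traveled along the ray
    for _ in range(steps):
        d = sphere_sdf(origin + t * direction, center, radius)
        t = t + d                      # safe step: the SDF bounds the distance to the surface
    return t                           # differentiable w.r.t. center and radius

center = torch.tensor([0.0, 0.0, 3.0], requires_grad=True)
radius = torch.tensor(1.0, requires_grad=True)
origin = torch.zeros(3)
direction = torch.tensor([0.0, 0.0, 1.0])  # unit ray direction

depth = sphere_trace(origin, direction, center, radius)
depth.backward()                       # gradients of rendered depth w.r.t. shape parameters
print(depth.item(), radius.grad.item())
```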
In this work, we focus on object-level 3D reconstruction and present a geometry-based, end-to-end deep learning framework that first detects the mirror plane of the reflection symmetry commonly found in man-made objects, and then predicts depth maps by finding intra-image pixel-wise correspondences across that symmetry.
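The geometric core of that idea fits in a few lines: mirror a hypothesized 3D point across the detected symmetry plane, then reproject it to find the corresponding pixel. This numpy sketch assumes a plane given by a unit normal n and offset d in camera coordinates and a pinhole intrinsic matrix K; all names and numbers are illustrative.

```python
import numpy as np

def reflect(p, n, d):
    """Mirror a 3D point across the plane {x : n.x + d = 0} (n is a unit normal)."""
    return p - 2.0 * (n @ p + d) * n

def project(p, K):
    """Pinhole projection of a camera-space point to pixel coordinates."""
    uv = K @ p
    return uv[:2] / uv[2]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
n = np.array([1.0, 0.0, 0.0])          # mirror plane x = 0.1 in the camera frame
d = -0.1

p = np.array([0.5, 0.2, 2.0])          # hypothesized 3D point for some pixel
p_mirror = reflect(p, n, d)            # its symmetric counterpart
print(project(p, K), project(p_mirror, K))  # correspondence pair used to score the depth
```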
We introduce PQ-NET, a deep neural network which represents and generates 3D shapes via sequential part assembly.
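As a loose sketch of the sequential part assembly idea, the snippet below encodes a variable-length sequence of per-part latent vectors into a single shape code with a GRU; the dimensions and module names are assumptions, not PQ-NET's actual networks.

```python
import torch
import torch.nn as nn

class PartSequenceEncoder(nn.Module):
    """Toy sequence-to-code encoder: per-part latents -> one shape latent.

    Dimensions and names are illustrative; PQ-NET's seq2seq networks differ.
    """
    def __init__(self, part_dim=64, shape_dim=256):
        super().__init__()
        self.rnn = nn.GRU(part_dim, shape_dim, batch_first=True)

    def forward(self, parts):
        # parts: (B, num_parts, part_dim), one latent vector per shape part
        _, h = self.rnn(parts)
        return h.squeeze(0)            # (B, shape_dim) shape code

encoder = PartSequenceEncoder()
parts = torch.randn(2, 5, 64)          # 2 shapes, 5 parts each
shape_code = encoder(parts)            # a matching decoder would emit parts back one by one
```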
Despite significant progress in monocular depth estimation in the wild, recent state-of-the-art methods cannot recover accurate 3D scene shape, due to an unknown depth shift induced by the shift-invariant reconstruction losses used in mixed-data depth prediction training, as well as a possibly unknown camera focal length.
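A small worked example makes the failure mode concrete: unprojecting a shift-invariant depth map into a point cloud needs both the missing shift t (true depth = d + t, up to scale) and the focal length f, and a wrong shift distorts geometry in a way no global rescaling can undo. The numbers below are made up for illustration.

```python
import numpy as np

def unproject(u, v, z, f, cx, cy):
    """Back-project pixel (u, v) with depth z through a pinhole camera."""
    return np.array([(u - cx) * z / f, (v - cy) * z / f, z])

f, cx, cy = 500.0, 320.0, 240.0
d = np.array([1.0, 3.0])               # shift-invariant predictions at two pixels
t = 0.5                                # unknown depth shift: true depth = d + t

# With the shift, the depth ratio is 3.5/1.5; without it, 3/1. A global rescale
# of the point cloud cannot reconcile the two, so the scene shape distorts.
print((d + t)[1] / (d + t)[0], d[1] / d[0])

# The focal length is needed to turn depths into lateral coordinates at all:
print(unproject(400.0, 240.0, 2.0, f, cx, cy))   # [0.32, 0.0, 2.0]
```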