Single-View 3D Reconstruction
42 papers with code • 7 benchmarks • 13 datasets
Most implemented papers
SDFDiff: Differentiable Rendering of Signed Distance Fields for 3D Shape Optimization
We propose SDFDiff, a novel approach for image-based shape optimization using differentiable rendering of 3D shapes represented by signed distance functions (SDFs).
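To ground the terminology: an SDF maps each 3D point to its signed distance from the surface (negative inside, positive outside), and such shapes are typically rendered by sphere tracing. The sketch below is a minimal illustration of that idea, not SDFDiff's differentiable renderer; `sphere_sdf` and `sphere_trace` are illustrative names.

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - center) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4):
    """March a ray forward by the SDF value until it falls below eps (surface hit)."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t  # distance along the ray to the surface
        t += d        # the SDF value is a safe step size
    return None       # no hit within max_steps

# A ray from z = -3 toward the origin hits the unit sphere at depth 2.
t_hit = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]), sphere_sdf)
```

SDFDiff's contribution is making this rendering step differentiable so image losses can be backpropagated to the SDF parameters; the marching loop itself is standard.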
Self-supervised Single-view 3D Reconstruction via Semantic Consistency
To the best of our knowledge, we are the first to try and solve the single-view reconstruction problem without a category-specific template mesh or semantic keypoints.
Few-Shot Single-View 3-D Object Reconstruction with Compositional Priors
In this work we demonstrate experimentally that naive baselines do not apply when the goal is to learn to reconstruct novel objects using very few examples, and that in a few-shot learning setting, the network must learn concepts that can be applied to new categories, avoiding rote memorization.
Robust Learning Through Cross-Task Consistency
Visual perception entails solving a wide set of tasks (e.g., object detection, depth estimation, etc.).
Continuous Object Representation Networks: Novel View Synthesis without Target View Supervision
As a result, current approaches typically rely on supervised training with either ground truth 3D models or multiple target images.
GAMesh: Guided and Augmented Meshing for Deep Point Networks
We present a new meshing algorithm called guided and augmented meshing, GAMesh, which uses a mesh prior to generate a surface for the output points of a point network.
Learning to Recover 3D Scene Shape from a Single Image
Despite significant progress in monocular depth estimation in the wild, recent state-of-the-art methods cannot recover accurate 3D scene shape, due to an unknown depth shift induced by the shift-invariant reconstruction losses used in mixed-data depth prediction training, and a possibly unknown camera focal length.
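To see why a depth shift and the focal length matter, recall pinhole back-projection: a pixel (u, v) with depth d unprojects to ((u - cx) * d / f, (v - cy) * d / f, d). A minimal sketch, with illustrative intrinsics (the function name and values are assumptions, not the paper's code):

```python
import numpy as np

def backproject(depth, f, cx, cy):
    """Unproject a depth map (H, W) to a 3D point cloud (H, W, 3) via a pinhole model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    return np.stack([x, y, depth], axis=-1)

depth = np.full((4, 4), 2.0)  # a flat plane 2 m from the camera
pts = backproject(depth, f=100.0, cx=2.0, cy=2.0)
# Adding an unknown constant shift to the depth, or using the wrong f, rescales
# x and y relative to z and distorts the recovered shape of non-planar scenes.
pts_shifted = backproject(depth + 0.5, f=100.0, cx=2.0, cy=2.0)
```

This is why the paper must estimate both the depth shift and the focal length before the predicted depth can be lifted to a faithful point cloud.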
Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement
From the unsupervised disentanglement perspective, we rethink content and style and propose a formulation for unsupervised C-S disentanglement, based on the assumption that different factors vary in their importance and popularity for image reconstruction, which serves as a data bias.
An Effective Loss Function for Generating 3D Models from Single 2D Image without Rendering
Then we use Poisson Surface Reconstruction to transform the reconstructed point cloud into a 3D mesh.
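A common rendering-free loss for comparing a generated point cloud against ground truth is the Chamfer distance: each point is matched to its nearest neighbor in the other set. The sketch below shows that standard loss as an illustration; it is not necessarily the exact loss proposed in the paper.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    # Pairwise squared distances via broadcasting, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbor distance in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
cd_zero = chamfer_distance(a, a)                           # identical sets -> 0
cd_shift = chamfer_distance(a, a + np.array([0.1, 0.0, 0.0]))
```

Because it needs no rendering and no point-to-point correspondence, a loss of this form can supervise point cloud generation directly from a ground-truth cloud, after which a method such as Poisson Surface Reconstruction turns the points into a mesh.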
PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers
In this paper, we reformulate point cloud completion as a set-to-set translation problem and design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion.
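The core operation behind such a set-to-set formulation is attention: a set of queries (e.g., for the missing region) attends over the feature set of the observed points, with no ordering assumed on either set. A minimal numpy sketch of scaled dot-product attention; the shapes and the idea of "queries for the missing part" are illustrative, not PoinTr's actual architecture.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention: each query aggregates over all keys (a set-to-set map)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # softmax over keys
    return w @ v

rng = np.random.default_rng(0)
observed = rng.normal(size=(128, 32))  # features of the observed partial cloud
queries = rng.normal(size=(64, 32))    # hypothetical queries for the missing region
out = attention(queries, observed, observed)  # (64, 32) predicted features
```

Permutation invariance over the key set is what makes attention a natural fit for point clouds, which carry no canonical ordering.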