Single-View 3D Reconstruction

42 papers with code • 7 benchmarks • 13 datasets

Single-view 3D reconstruction aims to recover the 3D shape of an object or scene from a single 2D image.

Most implemented papers

SDFDiff: Differentiable Rendering of Signed Distance Fields for 3D Shape Optimization

YueJiang-nj/CVPR2020-SDFDiff CVPR 2020

We propose SDFDiff, a novel approach for image-based shape optimization using differentiable rendering of 3D shapes represented by signed distance functions (SDFs).
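As a rough illustration of the underlying idea (not the SDFDiff implementation), the sketch below optimizes a single shape parameter through differentiable sphere tracing of an analytic signed distance field; the helpers `sdf_sphere` and `sphere_trace` and all numeric values are hypothetical.

```python
# Minimal sketch (not the SDFDiff code): optimize a shape parameter
# through differentiable sphere tracing of a signed distance field.
import torch

def sdf_sphere(points, radius):
    # SDF of a sphere centered at the origin: signed distance to the surface.
    return points.norm(dim=-1) - radius

def sphere_trace(origins, dirs, radius, n_steps=32):
    # March each ray forward by the SDF value; gradients flow through `radius`.
    t = torch.zeros(origins.shape[0])
    for _ in range(n_steps):
        pts = origins + t.unsqueeze(-1) * dirs
        t = t + sdf_sphere(pts, radius)
    return t  # per-ray depth along the ray

radius = torch.tensor(0.5, requires_grad=True)            # learnable shape parameter
origins = torch.tensor([[0.0, 0.0, -2.0]]).repeat(4, 1)   # camera rays start at z = -2
dirs = torch.tensor([[0.0, 0.0, 1.0]]).repeat(4, 1)       # all rays point toward +z
target_depth = torch.full((4,), 1.2)                      # depth rendered from a radius-0.8 sphere

opt = torch.optim.Adam([radius], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = (sphere_trace(origins, dirs, radius) - target_depth).pow(2).mean()
    loss.backward()
    opt.step()
print(radius.item())  # approaches 0.8
```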

Self-supervised Single-view 3D Reconstruction via Semantic Consistency

nvlabs/umr ECCV 2020

To the best of our knowledge, we are the first to try to solve the single-view reconstruction problem without a category-specific template mesh or semantic keypoints.

Few-Shot Single-View 3-D Object Reconstruction with Compositional Priors

JeremyFisher/few_shot_3dr ECCV 2020

In this work we demonstrate experimentally that naive baselines do not apply when the goal is to learn to reconstruct novel objects using very few examples, and that in a few-shot learning setting, the network must learn concepts that can be applied to new categories, avoiding rote memorization.

Robust Learning Through Cross-Task Consistency

EPFL-VILAB/XTConsistency CVPR 2020

Visual perception entails solving a wide set of tasks (e.g., object detection, depth estimation, etc.).
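The core mechanism, loosely sketched below under my own assumptions (this is not the XTConsistency code), is a consistency loss: predictions reached along different task paths, e.g. image to normals directly versus image to depth to normals, are penalized for disagreeing. `f_depth`, `f_normals`, and `g` are placeholder networks.

```python
# Minimal sketch of a cross-task consistency loss (not the XTConsistency code).
# f_depth, f_normals predict depth and normals from the image; g maps a depth
# map to normals, so the two paths to "normals" should agree.
import torch
import torch.nn.functional as F

def cross_task_consistency_loss(image, f_depth, f_normals, g, target_depth):
    depth = f_depth(image)
    direct_normals = f_normals(image)       # image -> normals
    transferred_normals = g(depth)          # image -> depth -> normals
    supervised = F.l1_loss(depth, target_depth)
    consistency = F.l1_loss(transferred_normals, direct_normals)
    return supervised + consistency
```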

Continuous Object Representation Networks: Novel View Synthesis without Target View Supervision

nicolaihaeni/corn NeurIPS 2020

As a result, current approaches typically rely on supervised training with either ground truth 3D models or multiple target images.

GAMesh: Guided and Augmented Meshing for Deep Point Networks

nitinagarwal/GAMesh 19 Oct 2020

We present a new meshing algorithm called guided and augmented meshing, GAMesh, which uses a mesh prior to generate a surface for the output points of a point network.

Learning to Recover 3D Scene Shape from a Single Image

aim-uofa/AdelaiDepth CVPR 2021

Despite significant progress in monocular depth estimation in the wild, recent state-of-the-art methods cannot be used to recover accurate 3D scene shape, owing to an unknown depth shift induced by the shift-invariant reconstruction losses used in mixed-data depth prediction training, as well as a possibly unknown camera focal length.
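To see why these two unknowns matter, the hedged sketch below unprojects an affine-invariant depth prediction into a point cloud; the shift and focal values are placeholders, and the helper `unproject` is hypothetical rather than part of AdelaiDepth.

```python
# Minimal sketch, with hypothetical shift/focal values: unprojecting an
# affine-invariant depth prediction requires resolving the unknown depth
# shift and focal length before the 3D point cloud has the correct shape.
import numpy as np

def unproject(pred_depth, shift, focal, cx, cy):
    # corrected depth = predicted depth + shift (overall scale is absorbed elsewhere)
    h, w = pred_depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = pred_depth + shift
    x = (u - cx) * z / focal
    y = (v - cy) * z / focal
    return np.stack([x, y, z], axis=-1)  # (H, W, 3) point cloud

pred = np.random.rand(480, 640).astype(np.float32)  # placeholder prediction
points = unproject(pred, shift=0.3, focal=500.0, cx=320.0, cy=240.0)
```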

Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement

xrenaa/CS-DisMo 21 Feb 2021

From the unsupervised disentanglement perspective, we rethink content and style and propose a formulation for unsupervised content-style (C-S) disentanglement, based on our assumption that different factors are of different importance and popularity for image reconstruction, which serves as a data bias.

An Effective Loss Function for Generating 3D Models from Single 2D Image without Rendering

NikolaZubic/2dimageto3dmodel 5 Mar 2021

Then we use Poisson Surface Reconstruction to transform the reconstructed point cloud into a 3D mesh.
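A minimal usage sketch of that second stage, using Open3D's Poisson reconstruction rather than whatever implementation the authors use, could look like the following; the input points are a placeholder for the network's reconstructed point cloud.

```python
# Minimal sketch using Open3D's Poisson reconstruction (the paper may use a
# different implementation): turn a point cloud with normals into a mesh.
import numpy as np
import open3d as o3d

points = np.random.rand(2048, 3)                     # placeholder reconstructed point cloud
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.estimate_normals()                               # Poisson needs oriented normals

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("reconstruction.ply", mesh)
```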

PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers

yuxumin/PoinTr ICCV 2021

In this paper, we present a new method that reformulates point cloud completion as a set-to-set translation problem and design a new model, called PoinTr, which adopts a transformer encoder-decoder architecture for point cloud completion.
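A bare-bones sketch of the set-to-set framing (not the actual PoinTr architecture, whose geometry-aware blocks and point-proxy generation are more involved) might look like the following; all layer sizes and names are illustrative.

```python
# Minimal sketch (not PoinTr itself): point cloud completion as set-to-set
# translation with a transformer encoder-decoder over point tokens.
import torch
import torch.nn as nn

class SetToSetCompletion(nn.Module):
    def __init__(self, d_model=256, n_queries=224):
        super().__init__()
        self.embed = nn.Linear(3, d_model)                             # partial points -> tokens
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))   # learned missing-part queries
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8, num_encoder_layers=4,
            num_decoder_layers=4, batch_first=True)
        self.to_xyz = nn.Linear(d_model, 3)                            # tokens -> coordinates

    def forward(self, partial):                                        # partial: (B, N, 3)
        src = self.embed(partial)
        tgt = self.queries.unsqueeze(0).expand(partial.size(0), -1, -1)
        out = self.transformer(src, tgt)                               # set-to-set translation
        return self.to_xyz(out)                                        # (B, n_queries, 3)

model = SetToSetCompletion()
completed = model(torch.rand(2, 1024, 3))                              # -> (2, 224, 3)
```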