Single-View 3D Reconstruction
43 papers with code • 7 benchmarks • 13 datasets
Latest papers with no code
Viewset Diffusion: (0-)Image-Conditioned 3D Generative Models from 2D Data
We fit a diffusion model to a large number of viewsets for a given category of objects.
Self-Supervised Surgical Instrument 3D Reconstruction from a Single Camera Image
An accurate 3D surgical instrument model is a prerequisite for precise predictions of the pose and depth of the instrument.
IC3D: Image-Conditioned 3D Diffusion for Shape Generation
To address this limitation and enhance image-guided 3D DDPMs with augmented 3D understanding, we introduce CISP (Contrastive Image-Shape Pre-training), obtaining a well-structured image-shape joint embedding space.
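A joint image-shape embedding space of the kind CISP describes is typically trained with a symmetric contrastive objective, as in CLIP. The sketch below is a minimal NumPy illustration of that generic objective, not the paper's actual implementation; the function name, temperature value, and embedding shapes are assumptions.

```python
import numpy as np

def contrastive_loss(img_emb, shape_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over paired image/shape embeddings.
    img_emb, shape_emb: (N, D) arrays; row i of each is a matching pair.
    Illustrative sketch of CLIP-style contrastive pre-training only."""
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    shp = shape_emb / np.linalg.norm(shape_emb, axis=1, keepdims=True)
    logits = img @ shp.T / temperature      # (N, N) similarity matrix
    idx = np.arange(len(img))               # matching pairs on the diagonal

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the image->shape and shape->image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls each image embedding toward its paired shape embedding and pushes it away from the other shapes in the batch, which is what yields a well-structured joint space.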
Inferring Implicit 3D Representations from Human Figures on Pictorial Maps
By assembling all body parts, we derive 2D depth images and body part masks of the whole figure for different views, which are fed into a fully convolutional network to predict UV images.
Few-shot Single-view 3D Reconstruction with Memory Prior Contrastive Network
In this paper, we present a Memory Prior Contrastive Network (MPCN) that can store shape prior knowledge in a few-shot learning based 3D reconstruction framework.
2D GANs Meet Unsupervised Single-view 3D Reconstruction
In light of this, we propose a novel image-conditioned neural implicit field, which can leverage 2D supervisions from GAN-generated multi-view images and perform the single-view reconstruction of generic objects.
Pre-train, Self-train, Distill: A simple recipe for Supersizing 3D Reconstruction
Our final 3D reconstruction model is also capable of zero-shot inference on images from unseen object categories and we empirically show that increasing the number of training categories improves the reconstruction quality.
AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis
In this paper, we address the problem of texture representation for 3D shapes for the challenging and underexplored tasks of texture transfer and synthesis.
Pose Adaptive Dual Mixup for Few-Shot Single-View 3D Reconstruction
We present a pose adaptive few-shot learning procedure and a two-stage data interpolation regularization, termed Pose Adaptive Dual Mixup (PADMix), for single-image 3D reconstruction.
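The "dual mixup" regularization builds on the standard mixup operation: convexly combining two training samples and their targets with a Beta-sampled coefficient. The sketch below shows only that basic input-level operation, assuming NumPy arrays for samples and targets; PADMix's full two-stage procedure (interpolating inputs and latent features with pose adaptation) is not reproduced here.

```python
import numpy as np

def mixup(x1, x2, y1, y2, alpha=0.4, rng=None):
    """Basic mixup: return a convex combination of two samples and their
    targets. lam ~ Beta(alpha, alpha), so mixes are biased toward one of
    the two endpoints for small alpha. Illustrative sketch only."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x1 + (1 - lam) * x2   # interpolated input
    y_mix = lam * y1 + (1 - lam) * y2   # interpolated target
    return x_mix, y_mix
```

Applied in a few-shot setting, such interpolation densifies the sparse training distribution, which is the role the data-interpolation regularization plays here.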
Domain Adaptation for Real-World Single View 3D Reconstruction
Experiments use ShapeNet as the source domain and domains within the Object Dataset Domain Suite (ODDS) as the target; ODDS is a real-world multi-view, multi-domain image dataset.