Object Reconstruction

42 papers with code • 0 benchmarks • 1 dataset

Object reconstruction is the task of recovering the full 3D shape of an object from one or more 2D images or depth views.

Most implemented papers

3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction

chrischoy/3D-R2N2 2 Apr 2016

Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2).
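At its core, 3D-R2N2 feeds each view's encoded features through a shared recurrent unit, so a single hidden state accumulates evidence across views before being decoded into voxel occupancies. A minimal numpy sketch of that recurrence, using a plain GRU update (dimensions, initialization, and the decoder step are illustrative, not the paper's):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: fuse the current view encoding x into hidden state h."""
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
feat_dim, hid_dim = 16, 8
params = [rng.normal(scale=0.1, size=s)
          for s in [(feat_dim, hid_dim), (hid_dim, hid_dim)] * 3]

h = np.zeros(hid_dim)                         # shared state across views
for view in rng.normal(size=(3, feat_dim)):   # three encoded views
    h = gru_step(h, view, *params)
# h now summarizes all views; a decoder would map it to voxel occupancies
```

The key property is that the same update is applied for any number of views, which is what lets the network handle both single- and multi-view reconstruction.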

A Point Set Generation Network for 3D Object Reconstruction from a Single Image

fanhqme/PointSetGeneration CVPR 2017

Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image.
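Training such a sampler requires a loss that compares unordered point sets; the paper uses the Chamfer distance (alongside Earth Mover's distance). A minimal numpy sketch of one common Chamfer variant (mean rather than sum aggregation is an illustrative choice here):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p: (N, 3) and q: (M, 3)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(a, a))                          # identical sets -> 0.0
print(chamfer_distance(a, a + np.array([0, 0, 1.0])))  # shifted by 1 along z -> 2.0
```

Because each point is matched to its nearest neighbor, the loss is invariant to point ordering, which is what makes it suitable for supervising predicted point clouds.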

3D Object Reconstruction from Hand-Object Interactions

dimtziwnas/InHandScanningICCV15_Reconstruction ICCV 2015

Recent advances have enabled 3D object reconstruction approaches using a single off-the-shelf RGB-D camera.

Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction

chenhsuanlin/3D-point-cloud-generation 21 Jun 2017

Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones.

Pix2Vox++: Multi-scale Context-aware 3D Object Reconstruction from Single and Multiple Images

hzxie/Pix2Vox 22 Jun 2020

A multi-scale context-aware fusion module is then introduced to adaptively select high-quality reconstructions for different parts from all coarse 3D volumes to obtain a fused 3D volume.

Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision

tensorflow/models NeurIPS 2016

We demonstrate the ability of the model to generate a 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects; and (3) testing on novel object classes.

3D Object Reconstruction from a Single Depth View with Adversarial Learning

Yang7879/3D-RecGAN 26 Aug 2017

In this paper, we propose a novel 3D-RecGAN approach, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks.

Dense 3D Object Reconstruction from a Single Depth View

Yang7879/3D-RecGAN-extended 1 Feb 2018

Unlike existing work, which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN++ takes only the voxel grid representation of a depth view of the object as input and generates the complete 3D occupancy grid at a high resolution of 256^3 by recovering the occluded/missing regions.
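The network's input is a voxelized depth view: points back-projected from the depth map are binned into a binary occupancy grid. That conversion can be sketched as follows (the 32^3 resolution and unit-cube bounds are illustrative choices, not the paper's 256^3 output setting):

```python
import numpy as np

def voxelize(points, resolution=32, bounds=(-1.0, 1.0)):
    """Bin 3D points (N, 3) into a binary occupancy grid of shape (R, R, R)."""
    lo, hi = bounds
    grid = np.zeros((resolution,) * 3, dtype=bool)
    idx = ((points - lo) / (hi - lo) * resolution).astype(int)
    idx = idx[np.all((idx >= 0) & (idx < resolution), axis=1)]  # drop out-of-bounds points
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

pts = np.array([[0.0, 0.0, 0.0],   # inside the bounds
                [0.5, 0.5, 0.5],   # inside the bounds
                [2.0, 0.0, 0.0]])  # outside the bounds, discarded
occ = voxelize(pts, resolution=32)
print(occ.sum())  # 2 occupied voxels
```

A grid at 256^3 holds roughly 16.7 million cells, which is why high-resolution occupancy prediction is the challenging part of this setting.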

CoReNet: Coherent 3D scene reconstruction from a single RGB image

google-research/corenet ECCV 2020

Furthermore, we adapt our model to address the harder task of reconstructing multiple objects from a single image.