RGB-D Reconstruction

8 papers with code • 0 benchmarks • 0 datasets

RGB-D reconstruction is the task of recovering the 3D geometry (and often the appearance) of a scene from the color and depth streams of an RGB-D camera, covering settings from static indoor scenes to non-rigidly deforming objects.

Most implemented papers

Efficient Plane-Based Optimization of Geometry and Texture for Indoor RGB-D Reconstruction

chaowang15/plane-opt-rgbd 21 May 2019

We propose a novel approach to reconstruct indoor RGB-D scenes based on plane primitives.
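
To make the idea of plane primitives concrete, here is a minimal, hedged sketch (not the paper's pipeline; the function names and thresholds are invented for illustration) of extracting a single dominant plane from an RGB-D point cloud with RANSAC, using only NumPy.

```python
# Minimal sketch (not the paper's method): fit one plane primitive
# n·x + d = 0 to an (N, 3) point cloud via RANSAC.
import numpy as np

def fit_plane_ransac(points, n_iters=200, inlier_thresh=0.01, rng=None):
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Sample 3 distinct points and compute the plane they span.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        # Count points within the distance threshold of the plane.
        dist = np.abs(points @ normal + d)
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

if __name__ == "__main__":
    # Synthetic scene: a noisy floor plane (z ~ 0) plus random clutter.
    rng = np.random.default_rng(0)
    floor = np.c_[rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.003, 500)]
    clutter = rng.uniform(-1, 1, (100, 3))
    (normal, d), inliers = fit_plane_ransac(np.vstack([floor, clutter]), rng=0)
    print("plane normal:", np.round(normal, 3), "inliers:", inliers.sum())
```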

DeepDeform: Learning Non-rigid RGB-D Reconstruction with Semi-supervised Data

AljazBozic/DeepDeform 9 Dec 2019

Applying data-driven approaches to non-rigid 3D reconstruction has been difficult, which we believe can be attributed to the lack of a large-scale training corpus.

Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation

jzhzhang/FusionAwareConv CVPR 2020

Online semantic 3D segmentation performed alongside real-time RGB-D reconstruction poses special challenges, such as how to perform 3D convolution directly on progressively fused 3D geometric data and how to fuse information intelligently from frame to frame.
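
As a rough illustration only (this is not the FusionAwareConv operator; the class, feature size, and blending rule are assumptions), the snippet below shows the flavor of the problem: per-frame point features are fused into a progressively growing global point set by averaging each new point's feature with those of its nearest already-fused neighbors.

```python
# Simplified sketch of online point-feature fusion (not the paper's operator).
import numpy as np

class OnlinePointFusion:
    def __init__(self, k=8, feat_dim=32):
        self.k = k
        self.points = np.empty((0, 3))         # fused 3D positions
        self.feats = np.empty((0, feat_dim))   # per-point features

    def integrate(self, new_points, new_feats):
        """Fuse one frame's points and features into the global map."""
        if len(self.points) > 0:
            # Brute-force k-NN against already fused points.
            d2 = ((new_points[:, None, :] - self.points[None, :, :]) ** 2).sum(-1)
            knn = np.argsort(d2, axis=1)[:, : self.k]
            # Blend each new feature with the mean feature of its neighbors.
            new_feats = 0.5 * new_feats + 0.5 * self.feats[knn].mean(axis=1)
        self.points = np.vstack([self.points, new_points])
        self.feats = np.vstack([self.feats, new_feats])
        return new_feats

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fusion = OnlinePointFusion()
    for frame in range(3):   # stand-ins for back-projected depth frames
        pts = rng.uniform(0, 1, (256, 3))
        feats = rng.normal(size=(256, 32))
        fusion.integrate(pts, feats)
    print("fused points:", fusion.points.shape, "features:", fusion.feats.shape)
```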

Depth-supervised NeRF: Fewer Views and Faster Training for Free

dunbar12138/DSNeRF CVPR 2022

Crucially, SfM also produces sparse 3D points that can be used as "free" depth supervision during training: we add a loss that encourages the distribution of a ray's terminating depth to match a given 3D keypoint, incorporating depth uncertainty.
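
The sketch below is a hedged approximation of what such a depth-supervision term could look like (it is not the exact DS-NeRF loss; the function signature and the cross-entropy form are assumptions): the ray's termination weights are pulled toward a Gaussian centered on the SfM keypoint depth, with the Gaussian's width set by the keypoint's depth uncertainty.

```python
# Rough sketch of a depth-supervision term for one ray (not the exact DS-NeRF loss).
import numpy as np

def depth_supervision_loss(weights, t_vals, keypoint_depth, depth_sigma):
    """Cross-entropy between a Gaussian depth target and the ray weights.

    weights        : (S,) NeRF compositing weights along one ray
    t_vals         : (S,) sample depths along the ray
    keypoint_depth : scalar depth of the matched SfM 3D point
    depth_sigma    : scalar uncertainty of that depth (e.g. reprojection error)
    """
    target = np.exp(-0.5 * ((t_vals - keypoint_depth) / depth_sigma) ** 2)
    target /= target.sum() + 1e-8            # normalize to a distribution
    return -(target * np.log(weights + 1e-8)).sum()

if __name__ == "__main__":
    t = np.linspace(2.0, 6.0, 64)
    # Toy ray weights peaked at depth 4.0 vs. a keypoint observed at 4.1 +/- 0.2.
    w = np.exp(-0.5 * ((t - 4.0) / 0.3) ** 2)
    w /= w.sum()
    print("loss:", depth_supervision_loss(w, t, keypoint_depth=4.1, depth_sigma=0.2))
```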

Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera

USTC3DV/NDR-code 30 Jun 2022

We propose Neural-DynamicReconstruction (NDR), a template-free method to recover high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D camera.

LiveNVS: Neural View Synthesis on Live RGB-D Streams

fraunhofer-iis/livenvs 28 Nov 2023

Based on the RGB-D input stream, novel views are rendered by projecting neural features into the target view via a densely fused depth map and aggregating the features in image space into a target feature map.
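
As a simplified, assumption-laden stand-in for that projection step (this is not the LiveNVS renderer; the function and all parameters are hypothetical), the sketch below warps a source view's per-pixel feature map into a target view using the target view's depth map: target pixels are back-projected to 3D, mapped into the source camera, and the source features are gathered with nearest-neighbor sampling.

```python
# Simplified feature warping via a depth map (illustrative only).
import numpy as np

def warp_features(src_feats, tgt_depth, K, T_tgt_to_src):
    """src_feats: (H, W, C) source features; tgt_depth: (H, W) target depths;
    K: (3, 3) shared intrinsics; T_tgt_to_src: (4, 4) target-to-source pose."""
    H, W, C = src_feats.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project target pixels to 3D points in the target camera frame.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # (3, H*W)
    pts_tgt = np.linalg.inv(K) @ pix * tgt_depth.reshape(1, -1)
    # Move the points into the source camera frame and project them.
    pts_src = T_tgt_to_src[:3, :3] @ pts_tgt + T_tgt_to_src[:3, 3:4]
    proj = K @ pts_src
    us = np.round(proj[0] / proj[2]).astype(int)
    vs = np.round(proj[1] / proj[2]).astype(int)
    valid = (proj[2] > 0) & (us >= 0) & (us < W) & (vs >= 0) & (vs < H)
    # Gather source features; pixels projecting outside the image stay zero.
    out = np.zeros((H * W, C), dtype=src_feats.dtype)
    out[valid] = src_feats[vs[valid], us[valid]]
    return out.reshape(H, W, C)

if __name__ == "__main__":
    H, W, C = 60, 80, 16
    K = np.array([[50.0, 0, W / 2], [0, 50.0, H / 2], [0, 0, 1]])
    feats = np.random.default_rng(0).normal(size=(H, W, C))
    depth = np.full((H, W), 2.0)
    T = np.eye(4)
    T[0, 3] = 0.05                 # small lateral camera shift
    print(warp_features(feats, depth, K, T).shape)
```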