We present an algorithm for fast, accurate depth-map estimation from light fields via a sparse set of depth edges and gradients.
However, when an extraordinary-ray (e-ray) image is restored to acquire stereo images, existing methods suffer from severe restoration artifacts due to the low signal-to-noise ratio of the input e-ray image or to depth/deconvolution errors.
Volumetric fusion enables real-time scanning using a conventional RGB-D camera, but its geometry resolution has been limited by the grid resolution of the volumetric distance field and by depth registration errors.
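To make the grid-resolution limitation concrete, here is a minimal sketch of the standard truncated signed distance field (TSDF) fusion update that such pipelines build on; this is the classic weighted-average integration scheme, not the specific method of the work summarized above, and all function and parameter names are illustrative.

```python
import numpy as np

def fuse_depth(tsdf, weights, depth, K, pose, voxel_size, trunc):
    """Integrate one depth frame into a TSDF volume (illustrative sketch).

    tsdf, weights : (N, N, N) arrays (signed distance, accumulation weight)
    depth         : (H, W) depth image in meters (0 = invalid)
    K             : (3, 3) camera intrinsics
    pose          : (4, 4) camera-to-world transform
    """
    N = tsdf.shape[0]
    # World coordinates of every voxel center (volume assumed at the origin)
    idx = np.indices((N, N, N)).reshape(3, -1).T
    pts_w = (idx + 0.5) * voxel_size
    # Transform voxel centers into the camera frame
    w2c = np.linalg.inv(pose)
    pts_c = pts_w @ w2c[:3, :3].T + w2c[:3, 3]
    z = pts_c[:, 2]
    z_safe = np.where(z > 1e-6, z, 1e-6)  # avoid divide-by-zero behind camera
    # Project into the depth image
    uv = pts_c @ K.T
    u = np.round(uv[:, 0] / z_safe).astype(int)
    v = np.round(uv[:, 1] / z_safe).astype(int)
    H, W = depth.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0
    # Truncated signed distance: measured depth minus voxel depth
    sdf = np.clip(d - z, -trunc, trunc) / trunc
    valid &= (d - z) > -trunc  # skip voxels far behind the surface
    # Running weighted average per voxel (views into the original arrays)
    flat_t, flat_w = tsdf.reshape(-1), weights.reshape(-1)
    i = np.where(valid)[0]
    flat_t[i] = (flat_t[i] * flat_w[i] + sdf[i]) / (flat_w[i] + 1.0)
    flat_w[i] += 1.0
    return tsdf, weights
```

Because surface detail finer than one voxel is averaged away by this update, the achievable geometry resolution is bounded by the grid spacing `voxel_size`, which is exactly the limitation the sentence above points to.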
We present a method to estimate dense depth by optimizing a sparse set of points such that their diffusion into a depth map minimizes a multi-view reprojection error from RGB supervision.
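A minimal sketch of the "diffusion" half of this idea: given a few sparse depth samples, a dense map can be filled in by Laplacian (heat-equation) diffusion with the samples held fixed. The method summarized above additionally optimizes the sparse values themselves against a multi-view reprojection loss; here the sparse depths are simply treated as fixed constraints, and all names are illustrative.

```python
import numpy as np

def diffuse_depth(sparse, mask, iters=500):
    """Fill a dense depth map from sparse samples by iterative diffusion.

    sparse : (H, W) array holding depth values where mask is True
    mask   : (H, W) boolean array marking the sparse constraints
    """
    # Initialize unconstrained pixels with the mean of the known depths
    d = np.where(mask, sparse, sparse[mask].mean())
    for _ in range(iters):
        # 4-neighbor average; border pixels replicate via edge padding
        p = np.pad(d, 1, mode="edge")
        d = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        d[mask] = sparse[mask]  # re-impose the sparse constraints
    return d
```

By the maximum principle of diffusion, the interpolated depths stay within the range of the sparse samples, and the constrained pixels are reproduced exactly.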
Previous light field depth estimation methods typically estimate a depth map only for the central sub-aperture view and struggle to produce view-consistent estimates.
Depth and spectral imaging have been studied extensively in isolation from each other for decades.
We present a novel method that can enhance the spatial resolution of stereo images using a parallax prior.
Given a region to be completed, specified by the user in one of several multiview photographs casually taken in a scene, the proposed method completes the entire set of photographs with geometric consistency by creating or removing structures in the specified region.