Recent advances in machine learning have spurred growing interest in solving visual computing problems with coordinate-based neural networks that parameterize physical properties of scenes or objects across space and time.
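As a concrete illustration, the following is a minimal sketch of such a coordinate-based network: a small MLP with a Fourier positional encoding that maps a space-time coordinate to a scalar field value. The layer widths, number of encoding frequencies, and the PyTorch framing are illustrative choices, not details from the text.

```python
import torch
import torch.nn as nn

class CoordinateField(nn.Module):
    """Minimal coordinate-based network: maps a space-time coordinate
    (x, y, z, t) to a scalar field value (e.g., density or depth)."""

    def __init__(self, in_dim=4, hidden=256, n_freqs=6):
        super().__init__()
        self.n_freqs = n_freqs
        enc_dim = in_dim * 2 * n_freqs  # sin and cos per frequency
        self.mlp = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def encode(self, x):
        # Fourier features: higher frequencies let a plain MLP fit fine detail.
        freqs = 2.0 ** torch.arange(self.n_freqs, device=x.device)
        ang = x[..., None] * freqs            # (..., in_dim, n_freqs)
        return torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(-2)

    def forward(self, coords):
        return self.mlp(self.encode(coords))

# Query the field at arbitrary continuous coordinates.
field = CoordinateField()
samples = torch.rand(1024, 4)   # (x, y, z, t) in [0, 1]
values = field(samples)         # one scalar per coordinate, shape (1024, 1)
```

The sinusoidal encoding is what lets a plain MLP represent high-frequency spatial detail; without it, fitted fields tend to be overly smooth.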
We present an algorithm that quickly estimates accurate depth maps from light fields via a sparse set of depth edges and gradients.
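The core operation implied here, densifying sparse depth edges and gradients into a full depth map, can be illustrated with a generic least-squares gradient-integration sketch in the screened-Poisson style. This is not the paper's solver; the function name, anchor scheme, and weighting are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def integrate_sparse_gradients(gx, gy, anchor_rc, anchor_val, lam=1.0):
    """Least-squares integration of a (mostly zero) gradient field into a
    dense depth map:
        min_D ||Dx D - gx||^2 + ||Dy D - gy||^2 + lam^2 ||D[anchors] - v||^2
    Anchors pin the absolute depth; integration alone fixes it only up to
    a constant."""
    H, W = gx.shape
    n = H * W

    def d1(k):  # 1-D forward-difference operator, (k-1) x k
        return sp.diags([-np.ones(k - 1), np.ones(k - 1)], [0, 1],
                        shape=(k - 1, k))

    Dx = sp.kron(sp.eye(H), d1(W))   # horizontal differences, row by row
    Dy = sp.kron(d1(H), sp.eye(W))   # vertical differences, column by column

    # One soft equation per anchor pixel: lam * D[r, c] = lam * v.
    k = len(anchor_rc)
    cols = np.ravel_multi_index(np.asarray(anchor_rc).T, (H, W))
    A_anc = sp.csr_matrix((np.full(k, lam), (np.arange(k), cols)),
                          shape=(k, n))

    A = sp.vstack([Dx, Dy, A_anc]).tocsr()
    b = np.concatenate([gx[:, :-1].ravel(),   # forward diffs span W-1 columns
                        gy[:-1, :].ravel(),   # and H-1 rows
                        lam * np.asarray(anchor_val, dtype=float)])
    return spsolve((A.T @ A).tocsc(), A.T @ b).reshape(H, W)

# A single vertical depth edge plus one anchor yields a two-level depth map.
H, W = 64, 64
gx, gy = np.zeros((H, W)), np.zeros((H, W))
gx[:, 31] = 1.0                               # depth jump between columns 31/32
depth = integrate_sparse_gradients(gx, gy, [(0, 0)], [0.0])
```

Because the gradient field is zero almost everywhere, the solve spends its degrees of freedom reproducing the few nonzero edges while staying piecewise smooth elsewhere, which is what makes a sparse edge/gradient representation sufficient.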
We present a method to estimate dense depth by optimizing a sparse set of points such that their diffusion into a depth map minimizes a multi-view reprojection error computed from RGB supervision.
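A schematic of this optimize-then-diffuse loop follows; it is not the authors' implementation. Sparse values are diffused with Jacobi iterations (known points re-clamped each step), the resulting map drives a simple horizontal-parallax warp of a second view, and the photometric error is backpropagated to the sparse values. The camera model, diffusion operator, and all hyperparameters are assumptions, and for numerical simplicity the diffused quantity is treated as disparity in pixels rather than metric depth.

```python
import torch
import torch.nn.functional as F

def diffuse(sparse_vals, mask, iters=200):
    """Differentiable diffusion: spread sparse values into a dense map via
    Jacobi iterations of Laplace's equation, re-clamping known points."""
    kernel = torch.tensor([[0.00, 0.25, 0.00],
                           [0.25, 0.00, 0.25],
                           [0.00, 0.25, 0.00]]).view(1, 1, 3, 3)
    d = sparse_vals
    for _ in range(iters):
        d = F.conv2d(d, kernel, padding=1)      # average the 4 neighbours
        d = torch.where(mask, sparse_vals, d)   # known points stay fixed
    return d

def reprojection_loss(disp, ref, src):
    """Photometric error after warping src into the ref view with a simple
    horizontal-parallax model; disp is disparity in pixels."""
    _, _, H, W = ref.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    x_src = (xs + disp[:, 0]) / (W - 1) * 2 - 1   # normalized sample coords
    y_src = (ys / (H - 1) * 2 - 1).expand_as(x_src)
    grid = torch.stack([x_src, y_src], dim=-1)    # (B, H, W, 2)
    warped = F.grid_sample(src, grid, align_corners=True)
    return (warped - ref).abs().mean()

# Optimize only the sparse values; diffusion and warping are differentiable,
# so the RGB reprojection error flows back to them.
H, W = 64, 64
mask = torch.rand(1, 1, H, W) < 0.02              # ~2% of pixels are "points"
vals = (torch.rand(1, 1, H, W) * mask).requires_grad_()
ref, src = torch.rand(1, 3, H, W), torch.rand(1, 3, H, W)

opt = torch.optim.Adam([vals], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = reprojection_loss(diffuse(vals * mask, mask), ref, src)
    loss.backward()
    opt.step()
```

The key design point is that only the sparse values are free parameters; the dense map is always a deterministic, differentiable function of them, so RGB supervision alone suffices to fit them.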
Previous light field depth estimation methods typically estimate a depth map only for the central sub-aperture view and struggle with view-consistent estimation.
Many 4D light field processing applications rely on superpixel segmentations, for which occlusion-aware view consistency is important.