Per-pixel ground-truth depth data is challenging to acquire at scale.
Accurate depth estimation from images is a fundamental task for many applications, including scene understanding and reconstruction.
#3 best model for Monocular Depth Estimation on NYU-Depth V2
In this paper, we address the problem of fast depth estimation on embedded systems.
To the best of our knowledge, this is the first work to show that deep networks trained using unlabelled monocular videos can predict globally scale-consistent camera trajectories over a long video sequence.
#11 best model for Monocular Depth Estimation on KITTI Eigen split
These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions.
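The MSE regression objective criticized above can be sketched minimally as follows. The masking convention (treating zero or non-finite ground-truth depth as missing, as with sparse LiDAR ground truth on KITTI) is an assumption for illustration, not taken from any particular paper.

```python
import numpy as np

def mse_depth_loss(pred, gt, mask=None):
    """Mean-squared-error loss over valid depth pixels.

    pred, gt: (H, W) arrays of predicted / ground-truth depth.
    mask: optional boolean (H, W) array of valid pixels; by default,
    zero or non-finite ground truth is treated as missing (assumption).
    """
    if mask is None:
        mask = np.isfinite(gt) & (gt > 0)
    diff = pred[mask] - gt[mask]
    return np.mean(diff ** 2)

# toy example: one missing measurement (the 0.0 entry) is masked out
gt = np.array([[1.0, 2.0], [0.0, 4.0]])
pred = np.array([[1.5, 2.0], [3.0, 3.0]])
loss = mse_depth_loss(pred, gt)  # averages over the 3 valid pixels
```

In practice this per-pixel squared error is what such regression networks minimize during training, which is the objective the snippet above argues converges slowly and can settle in poor local solutions.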
#2 best model for Monocular Depth Estimation on KITTI Eigen split
We address the unsupervised learning of several interconnected problems in low-level vision: single view depth prediction, camera motion estimation, optical flow, and segmentation of a video into the static scene and moving regions.
#20 best model for Monocular Depth Estimation on KITTI Eigen split
We propose a novel appearance-based object-detection system that is able to detect obstacles at very long range and at very high speed (~300 Hz), without making assumptions about the type of motion.
Despite learning-based methods showing promising results in single-view depth estimation and visual odometry, most existing approaches treat these tasks in a supervised manner.
Instead of using semantic labels and proxy losses in a multi-task approach, we propose a new architecture leveraging fixed pretrained semantic segmentation networks to guide self-supervised representation learning via pixel-adaptive convolutions.