Adjusting Bias in Long Range Stereo Matching: A semantics guided approach

Stereo vision generally involves the computation of pixel correspondences and the estimation of disparities between rectified image pairs. In many applications, including simultaneous localization and mapping (SLAM) and 3D object detection, the disparities are primarily needed to calculate depth values, and the accuracy of depth estimation is often more important than that of disparity estimation. The accuracy of disparity estimation, however, does not directly translate to the accuracy of depth estimation, especially for faraway objects. In the context of learning-based stereo systems, this is largely due to biases imposed by the choice of disparity-based loss function and of training data. Consequently, the learning algorithms often produce unreliable depth estimates of foreground objects, particularly at large distances~($>50$m). To resolve this issue, we first analyze the effect of those biases and then propose a pair of novel depth-based loss functions for foreground and background, separately. These loss functions are tunable and can balance the inherent bias of the stereo learning algorithms. The efficacy of our solution is demonstrated by an extensive set of experiments, benchmarked against the state of the art. We show on the KITTI~2015 benchmark that our proposed solution yields substantial improvements in disparity and depth estimation, particularly for objects located at distances beyond 50 meters, outperforming the previous state of the art by $10\%$.
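The abstract's central observation, that disparity accuracy does not translate to depth accuracy at range, follows from standard stereo geometry: depth $z = fB/d$ for focal length $f$, baseline $B$, and disparity $d$, so a fixed disparity error $\Delta d$ induces a depth error of roughly $\Delta z \approx z^2 \Delta d / (fB)$, growing quadratically with distance. The sketch below illustrates this amplification; it is not from the paper, and the focal length and baseline are approximate KITTI-like values chosen for illustration.

```python
# Illustration (not from the paper): a fixed disparity error causes a
# depth error that grows roughly quadratically with distance.
# Standard stereo geometry: z = f * B / d, so Δz ≈ z**2 * Δd / (f * B).
# focal_px and baseline_m are approximate KITTI-like values (assumption).

def depth_from_disparity(d_px, focal_px=721.0, baseline_m=0.54):
    """Convert a disparity (pixels) to metric depth (meters)."""
    return focal_px * baseline_m / d_px

def approx_depth_error(z_m, disp_err_px=1.0, focal_px=721.0, baseline_m=0.54):
    """First-order depth error caused by a fixed disparity error."""
    return z_m ** 2 * disp_err_px / (focal_px * baseline_m)

for z in (10, 25, 50, 100):
    print(f"z = {z:>3} m -> 1-px disparity error gives "
          f"~{approx_depth_error(z):.2f} m depth error")
```

At 50 m a single pixel of disparity error already corresponds to several meters of depth error, which is why a disparity-based loss that treats all pixels uniformly under-penalizes exactly the distant foreground objects the paper targets.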
