In this paper, we adapt place recognition methods designed for 3D point clouds to stereo visual odometry.
To the best of our knowledge, this is the first work to show that deep networks trained using unlabelled monocular videos can predict globally scale-consistent camera trajectories over a long video sequence.
This paper proposes a novel approach for extending monocular visual odometry to a stereo camera system.
Deep learning-based single-view depth estimation methods have recently shown highly promising results.
Real-time semantic image segmentation on platforms subject to size, weight and power (SWaP) constraints is a key area of interest for air surveillance and inspection.
For more robust and accurate ego-motion estimation, we add three components to the standard VO pipeline: 1) a hybrid projection model for improved feature matching, 2) a multi-view P3P RANSAC algorithm for pose estimation, and 3) online updating of the rig's extrinsic parameters.
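To make the pose-estimation step concrete, the sketch below shows the general shape of a RANSAC pose loop like the one described above: sample a minimal set of 3D–2D correspondences, solve for a camera pose hypothesis, score it by reprojection error, and keep the hypothesis with the most inliers. This is a minimal single-view illustration, not the authors' method: for a self-contained example it substitutes a simple DLT projection-matrix solver for the P3P minimal solver, and all function names (`dlt_projection`, `ransac_pose`) and parameter values are hypothetical.

```python
import numpy as np

def dlt_projection(X, x):
    """Estimate a 3x4 projection matrix P from n >= 6 3D-2D
    correspondences via the Direct Linear Transform."""
    A = []
    for Xw, u in zip(X, x):
        Xh = np.append(Xw, 1.0)  # homogeneous 3D point
        A.append(np.concatenate([Xh, np.zeros(4), -u[0] * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -u[1] * Xh]))
    # Null vector of A (smallest singular value) gives P up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def reproject(P, X):
    """Project 3D points with P and dehomogenize to pixels."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

def ransac_pose(X, x, iters=200, thresh=2.0, seed=0):
    """RANSAC loop: minimal-sample solve, score by reprojection
    error, keep the hypothesis with the most inliers, then refit."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(X), dtype=bool)
    best_P = None
    for _ in range(iters):
        idx = rng.choice(len(X), 6, replace=False)
        P = dlt_projection(X[idx], x[idx])
        err = np.linalg.norm(reproject(P, X) - x, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_P = inliers, P
    if best_inliers.sum() >= 6:  # refit on the full inlier set
        best_P = dlt_projection(X[best_inliers], x[best_inliers])
    return best_P, best_inliers
```

In the paper's multi-view variant, a single minimal sample would instead be scored by reprojection error accumulated over several cameras of the rig; the loop structure stays the same.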