8 papers with code • 0 benchmarks • 2 datasets
More surprisingly, they show that well-trained networks enable scale-consistent predictions over long videos, although their accuracy remains inferior to traditional methods because geometric information is ignored.
Many applications of Visual SLAM, such as augmented reality, virtual reality, robotics or autonomous driving, require versatile, robust and precise solutions, most often with real-time capability.
The code and the dataset link are publicly available at https://github.com/CapsuleEndoscope/EndoSLAM.
This paper presents a novel end-to-end framework for monocular VO by using deep Recurrent Convolutional Neural Networks (RCNNs).
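An RCNN-based VO pipeline of this kind typically stacks consecutive frames, extracts features with convolutional layers, and feeds them through a recurrent unit that regresses a relative 6-DoF pose at each time step. The sketch below illustrates that data flow only; the layer shapes, the random stand-in weights, and the single-layer recurrent cell are all illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT, HID = 64, 32

# Randomly initialised matrices stand in for trained parameters.
W_feat = rng.standard_normal((FEAT, 2 * 8 * 8)) * 0.1  # "CNN" over a stacked frame pair
W_xh = rng.standard_normal((HID, FEAT)) * 0.1          # recurrent input weights
W_hh = rng.standard_normal((HID, HID)) * 0.1           # recurrent state weights
W_pose = rng.standard_normal((6, HID)) * 0.1           # 6-DoF head: [tx, ty, tz, roll, pitch, yaw]

def vo_step(frame_pair, h):
    """One VO step: frame pair -> feature -> recurrent update -> relative pose."""
    f = np.tanh(W_feat @ frame_pair.ravel())
    h = np.tanh(W_xh @ f + W_hh @ h)
    return W_pose @ h, h

# Run over a toy "video" of five 8x8 frames.
frames = rng.standard_normal((5, 8, 8))
h = np.zeros(HID)
trajectory = []
for t in range(len(frames) - 1):
    pair = np.stack([frames[t], frames[t + 1]])  # channel-stack consecutive frames
    pose, h = vo_step(pair, h)
    trajectory.append(pose)

print(len(trajectory), trajectory[0].shape)  # 4 relative poses, each with 6 DoF
```

Because the hidden state `h` is carried across steps, each predicted pose can depend on the whole motion history, which is the property the recurrent part of such models is meant to provide.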
In this work we present WGANVO, a deep-learning-based monocular Visual Odometry method.