Our formulation is able to capture global context in a video and is thus robust to temporal content changes.
In this work, we propose to directly find a one-step solution to the point set registration problem without establishing correspondences.
Existing qualitative spatial calculi of positional information are compared to the new approach, and possibilities for future research are outlined.
Underwater robot interventions require a high level of safety and reliability.
We train one conventional and one spherical FCRN for underwater perspective and omni-directional images, respectively.
This paper presents a fully hardware-synchronized mapping robot with support for a hardware-synchronized external tracking system, enabling highly precise timing and localization.
In this work, we extend the iFMI method and apply a motion model to estimate an omni-camera's pose as it moves in 3D space.