In this work, we focus on precise 3D track state estimation and propose a learning-based approach for object-centric relative motion estimation of partially observed objects.
We present collections of images of the same rotating plastic object captured in the X-ray and visible spectra.
Our tracker achieves leading performance on OTB2013, OTB2015, VOT2015, VOT2016, and LaSOT, and operates in real time at 26 FPS, indicating that our method is both effective and practical.
In this paper, we propose a state-of-the-art video denoising algorithm based on a convolutional neural network architecture.
We present an approach which takes advantage of both structure and semantics for unsupervised monocular learning of depth and ego-motion.
To our knowledge, this is the first deep-learning-based solution to the problem of dynamic obstacle avoidance using event cameras on a quadrotor.
We reconstruct a set of non-linear factors that optimally approximate the information about the trajectory accumulated by VIO.
Depth estimation is an important capability for autonomous vehicles, allowing them to understand and reconstruct 3D environments as well as to avoid obstacles during operation.
For more robust and accurate ego-motion estimation, we add three components to the standard VO pipeline: 1) a hybrid projection model for improved feature matching, 2) a multi-view P3P RANSAC algorithm for pose estimation, and 3) online updating of the rig extrinsic parameters.
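The multi-view P3P RANSAC component above is specific to the paper's calibrated camera rig, for which no implementation details are given here. As a simplified, hypothetical analogue, the sketch below shows the generic RANSAC pattern that such a pose estimator follows (sample a minimal set, solve, count inliers, keep the best model), using a 2D rigid-transform minimal solver in place of P3P; all function names are illustrative, not the authors'.

```python
import numpy as np

def rigid_2d_from_pair(a, b):
    """Minimal solver: 2D rotation + translation from two point correspondences.

    Stand-in for the minimal pose solver (e.g. P3P) inside a RANSAC loop.
    """
    va, vb = a[1] - a[0], b[1] - b[0]
    theta = np.arctan2(vb[1], vb[0]) - np.arctan2(va[1], va[0])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = b[0] - R @ a[0]
    return R, t

def ransac_rigid_2d(src, dst, iters=200, thresh=0.05, rng=None):
    """Classic hypothesize-and-verify RANSAC over the minimal solver above."""
    rng = np.random.default_rng(rng)
    best = (None, None, 0)  # (R, t, inlier count)
    n = len(src)
    for _ in range(iters):
        idx = rng.choice(n, size=2, replace=False)   # minimal sample
        R, t = rigid_2d_from_pair(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = int((err < thresh).sum())          # verify against all data
        if inliers > best[2]:
            best = (R, t, inliers)
    return best
```

The same loop structure applies to pose estimation: only the minimal solver (here a 2-point 2D solver, there P3P on three 2D-3D correspondences per camera) and the reprojection-error check change.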
Accurate relative pose is one of the key components in visual odometry (VO) and simultaneous localization and mapping (SLAM).
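To make the notion of relative pose concrete: given two camera poses expressed in a common world frame as homogeneous transforms, the relative pose is obtained by composing one with the inverse of the other. A minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_w_a, T_w_b):
    """Pose of frame b expressed in frame a: T_a_b = T_w_a^{-1} @ T_w_b."""
    R_a, t_a = T_w_a[:3, :3], T_w_a[:3, 3]
    T_a_w = se3(R_a.T, -R_a.T @ t_a)  # closed-form SE(3) inverse
    return T_a_w @ T_w_b
```

By construction, composing the first pose with the relative pose recovers the second: `T_w_a @ relative_pose(T_w_a, T_w_b) == T_w_b`. VO and SLAM chain exactly such relative transforms between consecutive frames to build a trajectory, which is why errors in them accumulate as drift.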