In particular, we are the first to provide a depth-quality evaluation and an analysis of tracking results in depth-friendly scenarios for RGBD tracking.
This work introduces a simulator benchmark for vision-based autonomous navigation.
Hierarchical reinforcement learning (HRL) proposes to solve difficult tasks by performing decision-making and control at successively higher levels of temporal abstraction.
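As a minimal illustration of this idea (not any specific paper's method), the sketch below has a high-level policy choose among sub-policies, each of which then controls the agent for several primitive timesteps, i.e. decisions are made at a coarser temporal scale; all names and options are hypothetical:

```python
# Minimal HRL sketch: a high-level policy selects an option (sub-policy),
# which issues low-level actions for k primitive steps before the next
# high-level decision. Options and dynamics here are placeholders.
import random

def high_level_policy(state):
    # Hypothetical: choose an option based on the current state.
    return random.choice(["navigate_to_goal", "avoid_obstacle"])

def low_level_policy(option, state):
    # Hypothetical: each option maps states to primitive actions.
    return {"navigate_to_goal": "forward", "avoid_obstacle": "turn_left"}[option]

state, t, trace = {"pos": 0}, 0, []
while t < 20:
    option = high_level_policy(state)       # decision at the coarse timescale
    for _ in range(5):                      # option runs for 5 primitive steps
        action = low_level_policy(option, state)
        state["pos"] += 1                   # stand-in for an environment step
        trace.append((t, option, action))
        t += 1
print(trace[:6])
```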
RGBD (RGB plus depth) object tracking is gaining momentum as RGBD sensors have become popular in many application fields such as robotics. However, the best RGBD trackers are extensions of state-of-the-art deep RGB trackers.
We have recently proposed two pile loading controllers that learn from human demonstrations: a neural network (NNet) and a random forest (RF) controller.
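As a hedged sketch of the general recipe (not our actual pipeline), such an RF controller can be obtained by regressing the operator's control commands from machine states; the features, dimensions, and data below are placeholders:

```python
# Learning a controller from demonstrations with a random forest:
# fit a regressor on (state, operator command) pairs, then use its
# prediction as the control command at runtime. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical demonstration data: 4-D states (e.g. boom/bucket angles,
# distance to pile) paired with 2-D operator commands.
states = rng.normal(size=(1000, 4))
actions = states @ rng.normal(size=(4, 2)) + 0.1 * rng.normal(size=(1000, 2))

controller = RandomForestRegressor(n_estimators=100).fit(states, actions)

def act(state):
    # The controller regresses a control command from the current state.
    return controller.predict(state.reshape(1, -1))[0]

print(act(states[0]))
```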
In this work, we propose a deep depth-aware long-term tracker that achieves state-of-the-art RGBD tracking performance and runs fast.
A performance evaluation methodology and a benchmark for long-term visual object tracking are proposed.
The evaluation metric is based on a non-parametric probability density estimated from samples collected on a real physical setup.
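For illustration, a kernel density estimator is one standard way to obtain such a non-parametric density from samples; the snippet below uses SciPy's gaussian_kde as a stand-in and synthetic samples, not the benchmark's actual data or estimator:

```python
# Non-parametric density estimation from samples: fit a Gaussian KDE
# to observed outcomes, then score new outcomes under the density.
import numpy as np
from scipy.stats import gaussian_kde

samples = np.random.default_rng(0).normal(loc=0.0, scale=1.0, size=500)
density = gaussian_kde(samples)       # bandwidth chosen by Scott's rule

outcomes = np.array([-0.5, 0.0, 2.5])
print(density(outcomes))              # higher value = more probable outcome
```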
We consider single-query 6-DoF camera pose estimation with reference images and a point cloud, i.e., the problem of estimating the position and orientation of a camera using reference images and a point cloud.
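To make the geometry concrete, the sketch below recovers a pose from 2D-3D correspondences with OpenCV's solvePnP; in a real pipeline the correspondences would come from matching the query image against the reference images and point cloud, whereas here they are fabricated from a known pose:

```python
# Pose from 2D-3D correspondences: project known 3D points with a
# ground-truth pose to get synthetic image points, then recover the
# pose with solvePnP. Intrinsics and points are placeholders.
import numpy as np
import cv2

object_points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                          [1, 1, 1], [0, 0, 1], [1, 0, 1]], dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

rvec_gt = np.array([0.1, -0.2, 0.05])   # hypothetical ground-truth rotation
tvec_gt = np.array([0.0, 0.0, 5.0])     # hypothetical ground-truth translation
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, None)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
print(ok, rvec.ravel(), tvec.ravel())   # should recover rvec_gt, tvec_gt
```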
We present a statistical color constancy method that relies on novel gray pixel detection and mean shift clustering.
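As an illustration of the clustering step only (the gray pixel detector itself is assumed and not shown), mean shift can group the chromaticities of candidate gray pixels so that the dominant mode serves as an illuminant estimate; the candidates below are synthetic:

```python
# Mean shift over candidate gray-pixel chromaticities: the center of the
# most populated cluster is taken as the illuminant chromaticity estimate.
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(0)
# Hypothetical r/g chromaticities of detected gray pixels under one
# illuminant, plus a few outliers from non-gray surfaces.
gray = rng.normal([0.35, 0.31], 0.01, size=(200, 2))
outliers = rng.uniform(0.2, 0.5, size=(20, 2))
chroma = np.vstack([gray, outliers])

ms = MeanShift(bandwidth=0.03).fit(chroma)
labels, counts = np.unique(ms.labels_, return_counts=True)
illuminant = ms.cluster_centers_[labels[np.argmax(counts)]]
print(illuminant)   # dominant mode ~ illuminant chromaticity estimate
```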
Depth information provides a strong cue for occlusion detection and handling, but has been largely omitted in generic object tracking until recently due to the lack of suitable benchmark datasets and applications.
Successful fine-grained image classification methods learn the subtle details that separate visually similar (sub-)classes, but the problem becomes significantly more challenging if the details are missing due to low resolution.