As a robot's degrees of freedom increase, implementing its motion becomes more complex and difficult.
We propose a dynamically adaptive kernel-based method for drone detection and tracking using LiDAR.
Our experiments show that our method outperforms previous unsupervised and semi-supervised depth completion methods in accuracy.
In this paper, we present a method for simultaneous articulation model estimation and segmentation of an articulated object in RGB-D images using human hand motion.
In the experiments, we verify the parameters obtained by two types of offline calibration, chosen according to the robot's degrees of freedom, and validate the effectiveness of the online correction method by plotting the localization error during intense robot movement.