Progressive Self-Distillation for Ground-to-Aerial Perception Knowledge Transfer

29 Aug 2022  ·  Junjie Hu, Chenyou Fan, Mete Ozay, Hua Feng, Yuan Gao, Tin Lun Lam

We study a practical yet unexplored problem: how a drone can perceive an environment from different flight heights. Unlike autonomous driving, where perception is always conducted from a ground viewpoint, a flying drone may flexibly change its flight height for specific tasks, requiring viewpoint-invariant perception. Tackling such a problem with supervised learning would incur tremendous data-annotation costs across different flight heights. On the other hand, current semi-supervised learning methods are not effective under viewpoint differences. In this paper, we introduce ground-to-aerial perception knowledge transfer and propose a progressive semi-supervised learning framework that enables drone perception using only labeled data from the ground viewpoint and unlabeled data from flying viewpoints. Our framework has four core components: i) a dense viewpoint sampling strategy that splits the range of vertical flight heights into a set of small, evenly spaced intervals; ii) nearest-neighbor pseudo-labeling that infers labels for the nearest viewpoint with a model learned on the preceding viewpoint; iii) MixView, which generates augmented images across different viewpoints to alleviate viewpoint differences; and iv) a progressive distillation strategy that learns gradually until reaching the maximum flight height. We collect a synthesized dataset and a real-world dataset, and extensive experimental analyses show that our method yields 22.2% and 16.9% accuracy improvements on the synthesized and real-world datasets, respectively. Code and datasets are available at https://github.com/FreeformRobotics/Progressive-Self-Distillation-for-Ground-to-Aerial-Perception-Knowledge-Transfer.
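
To make the interplay of the four components more concrete, below is a minimal sketch of the progressive self-distillation loop as described in the abstract. It is not the authors' implementation: the callbacks train_step and predict, the mix_view blend, and the unlabeled_by_height grouping are hypothetical placeholders assumed for illustration.

# Minimal sketch of the progressive ground-to-aerial self-distillation loop
# (illustrative only; names and data layout are assumptions, not the paper's code).
import random

def mix_view(img_a, img_b, low=0.5):
    # MixView-style augmentation (assumed form): blend two images taken from
    # adjacent viewpoints pixel-wise to soften the viewpoint gap.
    lam = random.uniform(low, 1.0)
    return [lam * a + (1.0 - lam) * b for a, b in zip(img_a, img_b)]

def progressive_self_distillation(model, ground_data, unlabeled_by_height,
                                  train_step, predict):
    # i) Dense viewpoint sampling: unlabeled_by_height holds unlabeled images
    #    grouped into small, evenly spaced flight-height intervals (low -> high).
    # Start from supervised training on labeled ground-viewpoint data.
    model = train_step(model, ground_data)
    prev_images = [img for img, _ in ground_data]
    for images_k in unlabeled_by_height:
        # ii) Nearest-neighbor pseudo-labeling: the model trained on the
        #     preceding viewpoint labels images from the next-higher viewpoint.
        pseudo = [(img, predict(model, img)) for img in images_k]
        # iii) MixView: augment with mixes of current and preceding viewpoints,
        #      reusing the pseudo-labels of the current-viewpoint images.
        mixed = [(mix_view(img, random.choice(prev_images)), lbl)
                 for img, lbl in pseudo]
        # iv) Progressive distillation: fine-tune on pseudo-labeled and
        #     augmented data, then advance to the next height interval.
        model = train_step(model, pseudo + mixed)
        prev_images = images_k
    return model

In this reading, each height interval is learned only after the preceding one, so pseudo-label quality degrades slowly with the viewpoint gap rather than all at once; the exact augmentation and training details in the paper may differ from this sketch.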
