Long-Term Vehicle Localization by Recursive Knowledge Distillation

7 Apr 2019  ·  Hiroki Tomoe, Tanaka Kanji

Most current state-of-the-art frameworks for cross-season visual place recognition (CS-VPR) focus on domain adaptation (DA) to a single specific season. From the viewpoint of long-term CS-VPR, such frameworks do not scale well to multiple sequential domains (e.g., spring - summer - autumn - winter - ...). The goal of this study is to develop a novel long-term ensemble learning (LEL) framework that enables constant-cost retraining for long-term sequential-multi-domain CS-VPR (SMD-VPR): the framework memorizes only a small, constant number of deep convolutional neural networks (CNNs) and retrains the CNN ensemble each season at a small, constant time/space cost. We frame the task as multi-teacher multi-student knowledge distillation (MTMS-KD), which recursively compresses the knowledge of all previous seasons into the current CNN ensemble. We further address the teacher-student assignment (TSA) problem to achieve a good generalization/specialization tradeoff. Experimental results on SMD-VPR tasks validate the efficacy of the proposed approach.
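
The abstract does not specify the distillation objective, so the sketch below is only a minimal illustration of the recursive MTMS-KD idea under common assumptions: Hinton-style soft-target distillation with a temperature, place classes as the CNN outputs, and a hypothetical `assignments` list standing in for the paper's teacher-student assignment (TSA). The function names `mtms_distillation_loss` and `retrain_ensemble` are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def mtms_distillation_loss(student_logits, teacher_logits_list, assignment, T=4.0):
    """Soft-target distillation loss for one student against its assigned teachers.

    student_logits:      (B, C) logits of one student CNN
    teacher_logits_list: list of (B, C) logits, one per frozen teacher CNN
    assignment:          indices of the teachers assigned to this student (TSA)
    T:                   softmax temperature (assumed hyperparameter)
    """
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    loss = 0.0
    for t in assignment:
        p_teacher = F.softmax(teacher_logits_list[t] / T, dim=1)
        # KL(teacher || student), scaled by T^2 as in standard soft-target distillation
        loss = loss + F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
    return loss / max(len(assignment), 1)


def retrain_ensemble(students, teachers, assignments, loader, epochs=1, lr=1e-4, T=4.0):
    """One season of recursive distillation: the previous (constant-size) ensemble acts
    as the frozen teachers, and a same-sized set of students absorbs their knowledge
    together with the new season's labels.
    """
    for t in teachers:
        t.eval()  # teachers are frozen snapshots of the previous ensemble
    params = [p for s in students for p in s.parameters()]
    opt = torch.optim.Adam(params, lr=lr)

    for _ in range(epochs):
        for images, labels in loader:
            with torch.no_grad():
                teacher_logits = [t(images) for t in teachers]
            opt.zero_grad()
            total = 0.0
            for i, s in enumerate(students):
                logits = s(images)
                hard = F.cross_entropy(logits, labels)  # supervision from the new season
                soft = mtms_distillation_loss(logits, teacher_logits, assignments[i], T)
                total = total + hard + soft
            total.backward()
            opt.step()
    return students  # the new constant-size ensemble; the teachers can be discarded
```

In this reading, the memory footprint stays constant across seasons because only the current ensemble is kept; how the TSA indices are chosen to balance generalization against specialization is the part the paper addresses and is left abstract here.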
