Topological Vanilla Transfer Learning

In this paper we investigate the connection between the topological similarity of source and target tasks and the efficiency of vanilla transfer learning (i.e., transfer learning without retraining) between them. We argue that while strong topological similarity between the source and target tasks is necessary for efficient transfer, it is not sufficient. To this end, we further investigate what can be done to guarantee the efficient feature representation transfer that such vanilla transfer learning requires. To answer this, we provide a matrix-transformation-based homeomorphism (i.e., a topology-preserving mapping) that significantly improves transferability measures while keeping the topological properties of the source and target models intact. We prove that while finding such an optimal matrix transformation is typically APX-hard, there exists an efficient randomised algorithm whose approximation guarantees hold with high probability. To demonstrate the effectiveness of our approach, we run a number of experiments on transferring features between ImageNet and several other datasets (CIFAR-10, CIFAR-100, MNIST, and ISIC 2019) with a variety of pre-trained models (ResNet50, EfficientNetB3, and InceptionV3). These numerical results show that our matrix transformation can increase performance (measured by F-score) by up to 3-fold.
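To make the vanilla-transfer setting concrete, the sketch below extracts frozen features from an ImageNet pre-trained ResNet50 (one of the backbones used in the experiments) without retraining any layer. This is our own illustration in PyTorch/torchvision, not code from the paper, and the framework choice is an assumption.

```python
import torch
from torchvision import models

# Load an ImageNet pre-trained ResNet50 and replace its classification
# head with the identity, so the frozen backbone acts purely as a
# feature extractor -- no weights are retrained (vanilla transfer).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(batch):
    # batch: ImageNet-normalised images, shape (n, 3, 224, 224)
    return backbone(batch)  # 2048-dimensional features, shape (n, 2048)
```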
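The paper's optimal matrix transformation and its randomised approximation algorithm are not specified in this abstract, so the following minimal sketch only illustrates the general idea under stated assumptions: it samples random invertible matrices (an orthogonal factor times a positive diagonal, which is invertible by construction and hence a homeomorphism of the feature space) and keeps the one that maximises a transferability proxy. The nearest-class-mean accuracy used here is a hypothetical stand-in for the paper's actual transferability measure.

```python
import numpy as np

def random_invertible(d, rng):
    # Orthogonal Q (from QR of a Gaussian matrix) times a positive diagonal
    # of scales is invertible by construction, so z -> z @ W is a
    # homeomorphism of R^d.
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    scales = rng.uniform(0.5, 2.0, size=d)
    return q * scales  # equals q @ diag(scales)

def transferability_proxy(z, y):
    # Hypothetical proxy: accuracy of a nearest-class-mean classifier on
    # features z with integer labels y.
    classes = np.unique(y)
    means = np.stack([z[y == c].mean(axis=0) for c in classes])
    dists = ((z[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    return (classes[dists.argmin(axis=1)] == y).mean()

def randomized_transform_search(z, y, n_trials=200, seed=0):
    # Randomised search: sample invertible transforms and keep the one
    # that maximises the proxy score on the target features.
    rng = np.random.default_rng(seed)
    best_w = np.eye(z.shape[1])
    best_score = transferability_proxy(z, y)
    for _ in range(n_trials):
        w = random_invertible(z.shape[1], rng)
        score = transferability_proxy(z @ w, y)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score
```

In practice z would hold the frozen backbone features (2048-dimensional for ResNet50), and the proxy would be replaced by the transferability measure the paper optimises.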
