Cross-domain few-shot learning with unlabelled data

19 Jan 2021 · Fupin Yao

Few-shot learning aims to address the data scarcity problem, but when there is a domain shift between the training set and the test set, the performance of few-shot methods drops sharply. This setting is known as cross-domain few-shot learning, and it is especially challenging because the target domain is unseen during training. We therefore propose a new setting in which some unlabelled data from the target domain is provided, which can help bridge the gap between the source domain and the target domain. A benchmark for this setting is constructed using DomainNet (Peng et al.). We propose a self-supervised learning method to fully exploit the knowledge in the labelled training set and the unlabelled set. Extensive experiments show that our method outperforms several baseline methods by a large margin. We also carefully design an episodic training pipeline which yields a significant performance boost.
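The abstract only gives the high-level idea. As a rough illustration of how episodic training on labelled source data can be combined with a self-supervised objective on unlabelled target-domain images, the sketch below uses a prototypical-network-style episode loss and rotation prediction as the auxiliary task. The encoder, rotation head, loss weight, and random placeholder data are assumptions for illustration, not the authors' actual implementation.

```python
# Minimal sketch: episodic training plus a self-supervised auxiliary loss on
# unlabelled target-domain data. The backbone, rotation task, and 0.5 loss
# weight are illustrative assumptions, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small conv encoder; stand-in for the paper's backbone."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, out_dim)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

def episode_loss(encoder, support, support_y, query, query_y, n_way):
    """Prototypical episode loss: classify queries by distance to class prototypes."""
    z_s, z_q = encoder(support), encoder(query)
    prototypes = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
    logits = -torch.cdist(z_q, prototypes)  # negative Euclidean distance as logits
    return F.cross_entropy(logits, query_y)

def rotation_loss(encoder, rot_head, unlabelled):
    """Self-supervised auxiliary loss: predict which rotation was applied to each image."""
    xs, ys = [], []
    for k in range(4):  # 0, 90, 180, 270 degrees
        xs.append(torch.rot90(unlabelled, k, dims=(2, 3)))
        ys.append(torch.full((unlabelled.size(0),), k, dtype=torch.long))
    x, y = torch.cat(xs), torch.cat(ys)
    return F.cross_entropy(rot_head(encoder(x)), y)

if __name__ == "__main__":
    n_way, k_shot, q_per_class = 5, 5, 15
    encoder, rot_head = Encoder(), nn.Linear(64, 4)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(rot_head.parameters()), lr=1e-3)

    for step in range(3):  # a few toy episodes on random data
        support = torch.randn(n_way * k_shot, 3, 32, 32)
        support_y = torch.arange(n_way).repeat_interleave(k_shot)
        query = torch.randn(n_way * q_per_class, 3, 32, 32)
        query_y = torch.arange(n_way).repeat_interleave(q_per_class)
        unlabelled = torch.randn(16, 3, 32, 32)  # unlabelled target-domain batch

        loss = episode_loss(encoder, support, support_y, query, query_y, n_way) \
               + 0.5 * rotation_loss(encoder, rot_head, unlabelled)  # assumed weight
        opt.zero_grad(); loss.backward(); opt.step()
        print(f"step {step}: loss {loss.item():.3f}")
```

Rotation prediction is one common choice of self-supervised task; the paper's actual auxiliary objective and episodic pipeline may differ in the details.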
