cross-domain few-shot learning
15 papers with code • 0 benchmarks • 0 datasets
At its core, this is a transfer learning problem: a model is trained on a source domain and then transferred to a target domain, subject to three conditions: (1) the classes in the target domain never appear in the source domain; (2) the data distribution of the target domain differs from that of the source domain; (3) each class in the target domain has very few labeled examples. A minimal sketch of an evaluation episode built under these conditions follows below.
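As a concrete illustration of the three conditions, here is a minimal Python sketch of how an N-way K-shot evaluation episode is typically drawn from such a target domain. All names (`target_images_by_class`, `N_WAY`, etc.) and episode sizes are illustrative, not taken from any particular paper.

```python
import random

N_WAY, K_SHOT, Q_QUERY = 5, 1, 15  # common episode sizes; purely illustrative

def sample_episode(target_images_by_class):
    """target_images_by_class: dict mapping a class label to a list of images.
    The classes are assumed disjoint from the source-domain classes (condition 1),
    and each class may hold only a handful of labels (condition 3)."""
    classes = random.sample(list(target_images_by_class), N_WAY)
    support, query = [], []
    for label, cls in enumerate(classes):
        images = random.sample(target_images_by_class[cls], K_SHOT + Q_QUERY)
        support += [(img, label) for img in images[:K_SHOT]]   # few labeled samples
        query += [(img, label) for img in images[K_SHOT:]]     # held out for scoring
    return support, query  # the model adapts on `support`, is evaluated on `query`
```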
These leaderboards are used to track progress in cross-domain few-shot learning.
In this paper, we propose to train a more generalizable embedding network with self-supervised learning (SSL), which can provide robust representations for downstream tasks by learning from the data itself.
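As one hedged illustration of what such SSL pretraining can look like, here is a minimal PyTorch sketch of a SimCLR-style contrastive (NT-Xent) loss for the embedding network; the snippet above does not specify which SSL objective is used, so this particular choice is an assumption.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (B, D) embeddings of two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2B, D), unit-norm embeddings
    sim = z @ z.t() / temperature                 # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))             # exclude self-pairs from softmax
    B = z1.size(0)
    # the positive for row i is the other augmented view of the same image
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)])
    return F.cross_entropy(sim, targets)
```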
On the few-shot datasets miniImageNet and tieredImageNet with small domain shifts, CHEF is competitive with state-of-the-art methods.
Current state-of-the-art few-shot learners focus on developing effective training procedures for feature representations, before applying simple classifiers, e.g., nearest centroid.
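For concreteness, here is a minimal PyTorch sketch of such a nearest-centroid (prototype) classifier over frozen features; function and variable names are illustrative.

```python
import torch

def nearest_centroid_predict(support_feats, support_labels, query_feats):
    """support_feats: (N*K, D), support_labels: (N*K,), query_feats: (Q, D)."""
    classes = support_labels.unique()
    centroids = torch.stack([support_feats[support_labels == c].mean(0)
                             for c in classes])   # (N, D) per-class prototypes
    dists = torch.cdist(query_feats, centroids)   # (Q, N) Euclidean distances
    return classes[dists.argmin(dim=1)]           # nearest centroid wins
```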
In this paper, we look at the problem of cross-domain few-shot classification that aims to learn a classifier from previously unseen classes and domains with few labeled samples.
We propose a unified look at jointly learning multiple vision tasks and visual domains through universal representations, i.e., a single deep neural network.
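A minimal sketch of what a universal representation can look like in code: one shared backbone routed through lightweight per-domain heads. The module and parameter names here are assumptions for illustration, not the architecture from the paper.

```python
import torch.nn as nn

class UniversalNet(nn.Module):
    """One shared backbone serving several domains/tasks (illustrative sketch)."""
    def __init__(self, backbone, feat_dim, classes_per_domain):
        super().__init__()
        self.backbone = backbone                       # shared across all domains
        self.heads = nn.ModuleDict({
            domain: nn.Linear(feat_dim, n_classes)     # one lightweight head per domain
            for domain, n_classes in classes_per_domain.items()
        })

    def forward(self, x, domain):
        return self.heads[domain](self.backbone(x))   # route through the chosen head
```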
The TMHFS method extends the Meta-Confidence Transduction (MCT) and Dense Feature-Matching Networks (DFMN) methods by introducing a new prediction head, i.e., an instance-wise global classification network based on semantic information, after the common feature embedding network.
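A minimal sketch, under stated assumptions, of attaching an extra instance-wise global classification head after a shared embedding network; the actual MCT/DFMN heads are stood in by a placeholder, and all names are illustrative.

```python
import torch.nn as nn

class MultiHeadFewShotModel(nn.Module):
    def __init__(self, embedding_net, feat_dim, n_global_classes):
        super().__init__()
        self.embedding_net = embedding_net   # common feature embedding network
        self.metric_head = nn.Identity()     # placeholder for the MCT/DFMN heads
        self.global_head = nn.Linear(feat_dim, n_global_classes)  # the added head

    def forward(self, x):
        feats = self.embedding_net(x)
        # both heads consume the same shared features
        return self.metric_head(feats), self.global_head(feats)
```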
Since the base dataset and the unlabeled dataset come from different domains, projecting the target images into the class domain of the base dataset with a fixed pretrained model might be sub-optimal.