15 papers with code • 1 benchmark • 1 dataset
In this paper, we propose to train a more generalized embedding network with self-supervised learning (SSL), which provides robust representations for downstream tasks by learning from the data itself.
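One common self-supervised pretext task for pretraining such an embedding network is rotation prediction: the network learns representations by predicting how each unlabeled image was rotated. The sketch below builds such a pretext batch; it is an illustrative assumption, not the specific SSL objective used in the paper.

```python
import numpy as np

def rotation_pretext_batch(images):
    """Build a self-supervised batch: each image is copied at
    0/90/180/270-degree rotations and labeled with its rotation index.
    An embedding network trained to predict these labels learns
    representations without any class annotations (hypothetical sketch)."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):  # rotate by k * 90 degrees
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)
```

Each original image thus contributes four training examples, and the rotation index serves as a free supervisory signal.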
Ranked #2 on Few-Shot Image Classification on Mini-Imagenet 5-way (5-shot) (using extra training data)
Extensive experiments on the proposed benchmark evaluate state-of-the-art meta-learning approaches, transfer learning approaches, and newer methods for cross-domain few-shot learning.
In our final results, we combine the novel method with the baseline method in a simple ensemble and achieve an average accuracy of 73.78% on the benchmark.
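A "simple ensemble" of two classifiers is often implemented by averaging their softmax probabilities. The abstract does not specify the combination rule, so the following is a hedged sketch of one plausible choice:

```python
import numpy as np

def ensemble_predict(logits_a, logits_b):
    """Combine two models by averaging their softmax probabilities,
    then taking the argmax. This is one simple ensembling rule
    (assumed for illustration; the paper's exact rule may differ)."""
    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))  # stable softmax
        return e / e.sum(axis=1, keepdims=True)
    probs = (softmax(logits_a) + softmax(logits_b)) / 2
    return probs.argmax(axis=1)
```

Averaging probabilities rather than raw logits keeps both models on a comparable scale regardless of how confidently each one scores.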
Ranked #1 on Cross-Domain Few-Shot on miniImagenet
In this paper we reformulate few-shot classification as a reconstruction problem in latent space.
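The reconstruction view can be made concrete: score each class by how well a linear combination of its support features reconstructs the query feature, e.g. via ridge regression in latent space. The sketch below illustrates this idea under that assumption; it is not the paper's exact model.

```python
import numpy as np

def classify_by_reconstruction(query, support_sets, lam=0.1):
    """Assign the query to the class whose support features best
    reconstruct it linearly (ridge regression in latent space).
    query: (d,) feature vector; support_sets: dict class -> (n, d) features.
    Hedged sketch of the reconstruction idea, not a specific published model."""
    errors = {}
    for cls, S in support_sets.items():
        # w = argmin_w ||query - w @ S||^2 + lam * ||w||^2
        A = S @ S.T + lam * np.eye(S.shape[0])
        w = np.linalg.solve(A, S @ query)
        errors[cls] = np.linalg.norm(query - w @ S)
    return min(errors, key=errors.get)
```

The ridge term `lam` keeps the reconstruction well-conditioned when a class has few (or nearly collinear) support vectors, which is exactly the few-shot regime.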
However, when a domain shift exists between the training tasks and the test tasks, the learned inductive bias fails to generalize across domains, which degrades the performance of meta-learning models.
On the few-shot datasets miniImagenet and tieredImagenet with small domain shifts, CHEF is competitive with state-of-the-art methods.
In this paper, we look at the problem of cross-domain few-shot classification that aims to learn a classifier from previously unseen classes and domains with few labeled samples.
The TMHFS method extends the Meta-Confidence Transduction (MCT) and Dense Feature-Matching Networks (DFMN) method by introducing a new prediction head, i.e., an instance-wise global classification network based on semantic information, after the common feature embedding network.
Recent papers have suggested that transfer learning can outperform sophisticated meta-learning methods for few-shot image classification.