7 papers with code • 0 benchmarks • 0 datasets
In this paper, we propose to train a more generalized embedding network with self-supervised learning (SSL), which can provide robust representations for downstream tasks by learning from the data itself.
Ranked #2 on Few-Shot Image Classification on Mini-Imagenet 5-way (5-shot) (using extra training data)
Extensive experiments on the proposed benchmark are performed to evaluate state-of-the-art meta-learning approaches, transfer learning approaches, and newer methods for cross-domain few-shot learning.
In our final results, we combine the novel method with the baseline method in a simple ensemble and achieve an average accuracy of 73.78% on the benchmark.
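The abstract does not specify how the two methods are combined; a common simple ensemble is a weighted average of the two models' predicted class probabilities. A minimal sketch (function and argument names are illustrative, not from the paper):

```python
import numpy as np

def ensemble_predict(probs_a, probs_b, weight=0.5):
    """Combine two classifiers by averaging their class-probability outputs.

    probs_a, probs_b: (M, C) arrays of per-class probabilities from each model
    weight: contribution of the first model (0.5 = plain average)
    Returns the (M,) array of predicted class indices.
    """
    avg = weight * probs_a + (1.0 - weight) * probs_b
    return avg.argmax(axis=1)
```

With `weight=0.5` this is the plain average; the weight could also be tuned on validation data.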
Ranked #1 on Cross-Domain Few-Shot on miniImagenet
On the few-shot datasets miniImagenet and tieredImagenet with small domain shifts, CHEF is competitive with state-of-the-art methods.
The TMHFS method extends the Meta-Confidence Transduction (MCT) and Dense Feature-Matching Networks (DFMN) method by introducing a new prediction head, i.e., an instance-wise global classification network based on semantic information, after the common feature embedding network.
Adapting pre-trained representations has become the go-to recipe for learning new downstream tasks with limited examples.
Current state-of-the-art few-shot learners focus on developing effective training procedures for feature representations, before using simple classifiers, e.g. nearest centroid.
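A nearest-centroid classifier assigns each query to the class whose mean support embedding is closest. A minimal sketch of this step, assuming the embeddings are already computed (names are illustrative):

```python
import numpy as np

def nearest_centroid_predict(support_feats, support_labels, query_feats):
    """Classify query embeddings by their nearest class centroid.

    support_feats: (N, D) embeddings of the labeled support set
    support_labels: (N,) integer class labels
    query_feats: (M, D) embeddings to classify
    Returns the (M,) array of predicted labels.
    """
    classes = np.unique(support_labels)
    # One centroid (prototype) per class: the mean support embedding
    centroids = np.stack(
        [support_feats[support_labels == c].mean(axis=0) for c in classes]
    )
    # Euclidean distance from every query to every centroid
    dists = np.linalg.norm(
        query_feats[:, None, :] - centroids[None, :, :], axis=-1
    )
    return classes[dists.argmin(axis=1)]
```

Variants replace the Euclidean distance with cosine similarity on normalized embeddings; the point is that all learning happens in the feature extractor, not the classifier.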