12 papers with code • 1 benchmark • 1 dataset
In this paper, we propose to train a more general embedding network with self-supervised learning (SSL), which provides robust representations for downstream tasks by learning from the data itself.
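The contrastive flavor of SSL described here can be illustrated with a minimal InfoNCE-style objective. This is a generic sketch, not the paper's actual training code; the toy 2-D "embeddings" and the temperature value are illustrative assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def contrastive_loss(anchor, positive, negatives, temperature=0.5):
    # InfoNCE-style objective: the loss is low when the anchor is most similar
    # to the positive (another augmented view of the same image) and dissimilar
    # to the negatives (views of other images). No labels are needed.
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / temperature) for s in sims]
    return -math.log(exps[0] / sum(exps))

# Toy 2-D embeddings: the positive pair is aligned, the negative is orthogonal.
loss = contrastive_loss([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
print(round(loss, 3))  # → 0.127 (already well separated, so the loss is small)
```

Minimizing this loss over many images pulls augmented views of the same image together in embedding space, which is what yields the label-free "robust representation" the abstract refers to.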
Ranked #2 on Few-Shot Image Classification on Mini-Imagenet 5-way (5-shot) (using extra training data)
Extensive experiments on the proposed benchmark are performed to evaluate state-of-the-art meta-learning approaches, transfer learning approaches, and newer methods for cross-domain few-shot learning.
In our final results, we combine the novel method with the baseline method in a simple ensemble, achieving an average accuracy of 73.78% on the benchmark.
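A "simple ensemble" of two classifiers typically means averaging their per-class probability outputs and taking the argmax. The sketch below shows that idea only; the function name and the toy 5-way probabilities are illustrative, not the paper's implementation.

```python
def ensemble_predict(probs_novel, probs_baseline):
    """Average two models' class-probability outputs and take the argmax."""
    preds = []
    for p_n, p_b in zip(probs_novel, probs_baseline):
        # Equal-weight average of the two probability distributions.
        avg = [(a + b) / 2.0 for a, b in zip(p_n, p_b)]
        preds.append(max(range(len(avg)), key=avg.__getitem__))
    return preds

# Toy 5-way probabilities for two query images from each model.
novel    = [[0.6, 0.1, 0.1, 0.1, 0.1], [0.2, 0.5, 0.1, 0.1, 0.1]]
baseline = [[0.3, 0.4, 0.1, 0.1, 0.1], [0.1, 0.6, 0.1, 0.1, 0.1]]
print(ensemble_predict(novel, baseline))  # → [0, 1]
```

Note how the first query is contested (the baseline prefers class 1), but averaging lets the more confident model dominate, which is why even an equal-weight ensemble can beat either member.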
Ranked #1 on Cross-Domain Few-Shot on miniImagenet
On the few-shot datasets miniImagenet and tieredImagenet with small domain shifts, CHEF is competitive with state-of-the-art methods.
However, when there is a domain shift between the training tasks and the test tasks, the learned inductive bias fails to generalize across domains, which degrades the performance of meta-learning models.
The TMHFS method extends the Meta-Confidence Transduction (MCT) and Dense Feature-Matching Networks (DFMN) methods by introducing a new prediction head, i.e., an instance-wise global classification network based on semantic information, after the common feature embedding network.
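The architecture described, one shared embedding feeding both a few-shot head and an added instance-wise global classifier, can be sketched as follows. This is a hypothetical pure-Python toy (the real heads are neural networks); every function name, weight matrix, and shape here is an illustrative assumption, not TMHFS's actual code.

```python
def linear(x, weights):
    # Minimal dense layer without bias: one output per weight column.
    return [sum(xi * w for xi, w in zip(x, col)) for col in zip(*weights)]

def shared_embedding(x, w_embed):
    # Common feature-embedding network shared by both prediction heads.
    return [max(v, 0.0) for v in linear(x, w_embed)]  # ReLU features

def episodic_logits(feat, prototypes):
    # Few-shot head: score each class by negative squared distance to its prototype.
    return [-sum((f - p) ** 2 for f, p in zip(feat, proto)) for proto in prototypes]

def global_logits(feat, w_cls):
    # Added instance-wise global classification head over all base classes.
    return linear(feat, w_cls)

# One input passes through the shared embedding, then through both heads.
feat = shared_embedding([1.0, -2.0], [[1.0, 0.0], [0.0, 1.0]])
scores = episodic_logits(feat, [[1.0, 0.0], [0.0, 1.0]])
print(max(range(len(scores)), key=scores.__getitem__))       # → 0 (nearest prototype)
print(global_logits(feat, [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]))  # → [1.0, 0.0, 0.0]
```

The point of the extra head is that both classifiers backpropagate through the same embedding, so the global classification signal regularizes the features the few-shot head relies on.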
Adapting pre-trained representations has become the go-to recipe for learning new downstream tasks with limited examples.