Cross-Domain Few-Shot Learning

30 papers with code • 1 benchmark • 1 dataset

At its core, cross-domain few-shot learning is a transfer learning problem: a model is trained on a source domain and then transferred to a target domain, under the conditions that (1) the categories in the target domain never appear in the source domain, (2) the data distribution of the target domain differs from that of the source domain, and (3) each class in the target domain has only a few labeled examples.
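
To make condition (3) concrete, the sketch below samples an N-way K-shot evaluation episode from a labeled target set. It is a minimal, generic illustration; the function name `sample_episode` and its parameters are illustrative, not taken from any of the papers listed here.

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way=5, k_shot=5, q_queries=15, rng=None):
    """Sample indices for an N-way K-shot episode from a labeled target set.

    labels: a list with one class label per example index.
    Returns (support_indices, query_indices): n_way classes, with k_shot
    support and q_queries query examples per class.
    """
    rng = rng or random.Random()
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)

    classes = rng.sample(sorted(by_class), n_way)
    support, query = [], []
    for c in classes:
        picked = rng.sample(by_class[c], k_shot + q_queries)
        support.extend(picked[:k_shot])
        query.extend(picked[k_shot:])
    return support, query

# Example: a 5-way 5-shot episode over a toy label list.
toy_labels = [i % 10 for i in range(400)]
s, q = sample_episode(toy_labels, n_way=5, k_shot=5, q_queries=15)
print(len(s), len(q))  # 25 75
```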

Most implemented papers

Modular Adaptation for Cross-Domain Few-Shot Learning

frkl/modular-adaptation 1 Apr 2021

Adapting pre-trained representations has become the go-to recipe for learning new downstream tasks with limited examples.

DAMSL: Domain Agnostic Meta Score-based Learning

johncai117/DAMSL 6 Jun 2021

In this paper, we propose Domain Agnostic Meta Score-based Learning (DAMSL), a novel, versatile and highly effective solution that significantly outperforms state-of-the-art methods for cross-domain few-shot learning.

Dynamic Distillation Network for Cross-Domain Few-Shot Recognition with Unlabeled Data

asrafulashiq/dynamic-cdfsl NeurIPS 2021

As the base dataset and unlabeled dataset are from different domains, projecting the target images in the class-domain of the base dataset with a fixed pretrained model might be sub-optimal.
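
The general idea referred to here, adapting the model on unlabeled target images by distilling from a slowly updated teacher, can be pictured with the generic student–teacher sketch below. This is not the authors' exact training recipe; the EMA momentum, temperature, and all names are illustrative assumptions.

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, momentum=0.999):
    # Teacher weights track the student as an exponential moving average.
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1 - momentum)

def distillation_loss(student, teacher, unlabeled_batch, temperature=4.0):
    # The student is trained to match the teacher's softened predictions
    # on unlabeled target-domain images.
    with torch.no_grad():
        t_logits = teacher(unlabeled_batch)
    s_logits = student(unlabeled_batch)
    t_prob = F.softmax(t_logits / temperature, dim=-1)
    s_logprob = F.log_softmax(s_logits / temperature, dim=-1)
    return F.kl_div(s_logprob, t_prob, reduction="batchmean")

# Toy usage with a linear layer as a stand-in for a CNN backbone.
student = torch.nn.Linear(128, 64)
teacher = copy.deepcopy(student)
x = torch.randn(32, 128)
loss = distillation_loss(student, teacher, x)
loss.backward()
ema_update(teacher, student)
```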

EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization

ondrejbohdal/evograd NeurIPS 2021

Gradient-based meta-learning and hyperparameter optimization have seen significant progress recently, enabling practical end-to-end training of neural networks together with many hyperparameters.
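
The standard setup this sentence refers to is differentiating a validation loss with respect to hyperparameters through an unrolled training step. The snippet below shows that classic one-step hypergradient for an L2 regularization weight; it is not EvoGrad's more efficient estimator, and all variable names are illustrative.

```python
import torch

torch.manual_seed(0)
w = torch.randn(10, requires_grad=True)        # model weights
log_lam = torch.zeros(1, requires_grad=True)   # hyperparameter: log of the L2 weight
x_tr, y_tr = torch.randn(64, 10), torch.randn(64)
x_val, y_val = torch.randn(64, 10), torch.randn(64)

def train_loss(w):
    return ((x_tr @ w - y_tr) ** 2).mean() + log_lam.exp() * (w ** 2).sum()

# Inner step: one SGD update on the training loss, kept in the autograd graph.
grad_w = torch.autograd.grad(train_loss(w), w, create_graph=True)[0]
w_new = w - 0.1 * grad_w

# Outer step: differentiate the validation loss of the updated weights
# with respect to the hyperparameter.
val_loss = ((x_val @ w_new - y_val) ** 2).mean()
hypergrad = torch.autograd.grad(val_loss, log_lam)[0]
print(hypergrad)
```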

Meta-FDMixup: Cross-Domain Few-Shot Learning Guided by Labeled Target Data

lovelyqian/Meta-FDMixup 26 Jul 2021

A novel disentangling module, together with a domain classifier, is proposed to extract disentangled domain-irrelevant and domain-specific features.
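
The kind of module described here can be sketched as two feature branches with a domain classifier attached to the domain-specific branch. The code below is a generic illustration under that reading, not the Meta-FDMixup module itself; the class name and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class Disentangler(nn.Module):
    """Split backbone features into domain-irrelevant and domain-specific parts.

    A generic sketch: two linear projections produce the two branches, and a
    small domain classifier is trained on the domain-specific branch.
    """
    def __init__(self, feat_dim=512, num_domains=2):
        super().__init__()
        self.irrelevant = nn.Linear(feat_dim, feat_dim)
        self.specific = nn.Linear(feat_dim, feat_dim)
        self.domain_classifier = nn.Linear(feat_dim, num_domains)

    def forward(self, feats):
        f_irr = self.irrelevant(feats)    # intended for the few-shot classifier
        f_spec = self.specific(feats)     # intended to carry domain cues
        domain_logits = self.domain_classifier(f_spec)
        return f_irr, f_spec, domain_logits

# Toy usage: domain labels 0 = source, 1 = target.
module = Disentangler()
feats = torch.randn(8, 512)
domain_labels = torch.randint(0, 2, (8,))
_, _, logits = module(feats)
nn.functional.cross_entropy(logits, domain_labels).backward()
```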

Boosting the Generalization Capability in Cross-Domain Few-shot Learning via Noise-enhanced Supervised Autoencoder

Dipeshtamboli/Cross-Domain-FSL-via-NSAE ICCV 2021

State-of-the-art (SOTA) few-shot learning (FSL) methods suffer a significant performance drop in the presence of domain differences between source and target datasets.
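
The noise-enhanced supervised autoencoder named in the title can be pictured as a classifier with an auxiliary reconstruction objective on noise-perturbed inputs. The sketch below is a generic illustration under that assumption, not the authors' exact architecture or loss weighting; all names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class SupervisedAutoencoder(nn.Module):
    """Encoder with both a classification head and a reconstruction decoder."""
    def __init__(self, in_dim=784, hid_dim=256, num_classes=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, in_dim)
        self.classifier = nn.Linear(hid_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z)

def noisy_sae_loss(model, x, y, noise_std=0.1, recon_weight=1.0):
    # Perturb the input with Gaussian noise, then require both correct
    # classification and reconstruction of the clean input.
    x_noisy = x + noise_std * torch.randn_like(x)
    logits, recon = model(x_noisy)
    return (nn.functional.cross_entropy(logits, y)
            + recon_weight * nn.functional.mse_loss(recon, x))

model = SupervisedAutoencoder()
x, y = torch.randn(16, 784), torch.randint(0, 64, (16,))
noisy_sae_loss(model, x, y).backward()
```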

Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain Few-Shot Learning

lovelyqian/wave-SAN-CDFSL 15 Mar 2022

The key challenge of CD-FSL lies in the huge data shift between source and target domains, which is typically in the form of totally different visual styles.
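
One way to picture style variation in the wavelet domain is to exchange the low-frequency (coarse, style-like) band of two images while keeping one image's high-frequency details. The snippet below is only a rough illustration of that idea, not the paper's actual style augmentation network; it assumes PyWavelets is installed and uses single-channel images.

```python
import numpy as np
import pywt  # PyWavelets (pip install PyWavelets)

def swap_lowfreq_style(img_a, img_b, wavelet="haar"):
    """Swap the low-frequency (approximation) wavelet band of two images.

    The coarse, style-like content of img_b is combined with the detail
    bands of img_a. Both inputs are 2-D arrays of the same shape.
    """
    _, details_a = pywt.dwt2(img_a, wavelet)
    cA_b, _ = pywt.dwt2(img_b, wavelet)
    # Keep img_a's high-frequency details, take img_b's low-frequency band.
    return pywt.idwt2((cA_b, details_a), wavelet)

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
mixed = swap_lowfreq_style(a, b)
print(mixed.shape)  # (64, 64)
```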

Feature Extractor Stacking for Cross-domain Few-shot Learning

hongyujerrywang/featureextractorstacking 12 May 2022

Recently published CDFSL methods generally construct a universal model that combines knowledge of multiple source domains into one feature extractor.
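
A simple way to combine several source-domain extractors at test time is to concatenate their embeddings and fit a lightweight classifier on the few-shot support set. The sketch below does this with stand-in backbones and a nearest-centroid classifier; it is a generic illustration, not the stacking procedure proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Two stand-in "pretrained" extractors; in practice these would be frozen
# backbones trained on different source domains.
extractors = [nn.Sequential(nn.Linear(128, 64), nn.ReLU()) for _ in range(2)]

def stacked_features(x):
    # Concatenate the embeddings of all extractors into one feature vector.
    with torch.no_grad():
        return torch.cat([f(x) for f in extractors], dim=-1)

def nearest_centroid_predict(support_x, support_y, query_x, n_way=5):
    # Fit class centroids on the few-shot support set, then classify
    # queries by cosine similarity to the centroids.
    s, q = stacked_features(support_x), stacked_features(query_x)
    centroids = torch.stack([s[support_y == c].mean(0) for c in range(n_way)])
    sims = F.normalize(q, dim=-1) @ F.normalize(centroids, dim=-1).T
    return sims.argmax(-1)

support_x = torch.randn(25, 128)
support_y = torch.arange(5).repeat_interleave(5)
query_x = torch.randn(75, 128)
print(nearest_centroid_predict(support_x, support_y, query_x).shape)  # torch.Size([75])
```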

Graph Information Aggregation Cross-Domain Few-Shot Learning for Hyperspectral Image Classification

YuxiangZhang-BIT/IEEE_TNNLS_Gia-CFSL IEEE Transactions on Neural Networks and Learning Systems 2022

The IDE-block is used to characterize and aggregate the intra-domain nonlocal relationships, while the inter-domain feature and distribution similarities are captured in the CSA-block.
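
Aggregating intra-domain nonlocal relationships can be illustrated with a simple similarity-graph aggregation over a batch of features. The sketch below builds a cosine kNN graph and averages each sample's neighbors; it is a generic illustration of this kind of graph aggregation, not the paper's IDE-block or CSA-block.

```python
import torch
import torch.nn.functional as F

def graph_aggregate(feats, k=5):
    """Aggregate each sample's features over its k most similar neighbors.

    A cosine-similarity kNN graph is built over the batch and features are
    averaged over neighbors via a row-normalized adjacency matrix.
    """
    normed = F.normalize(feats, dim=-1)
    sims = normed @ normed.T
    topk = sims.topk(k, dim=-1).indices                   # neighbor indices per node
    adj = torch.zeros_like(sims).scatter_(1, topk, 1.0)   # binary kNN adjacency
    adj = adj / adj.sum(-1, keepdim=True)                 # row-normalize
    return adj @ feats                                    # aggregated features

feats = torch.randn(10, 32)
print(graph_aggregate(feats).shape)  # torch.Size([10, 32])
```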

Learn-to-Decompose: Cascaded Decomposition Network for Cross-Domain Few-Shot Facial Expression Recognition

zouxinyi0625/cdnet 16 Jul 2022

Extensive experiments on both in-the-lab and in-the-wild compound expression datasets demonstrate the superiority of our proposed CDNet against several state-of-the-art FSL methods.