Cross-Domain Few-Shot
55 papers with code • 9 benchmarks • 6 datasets
Latest papers
Cross-domain Multi-modal Few-shot Object Detection via Rich Text
Cross-modal feature extraction and integration have led to steady performance improvements in few-shot learning tasks by generating richer features.
Discriminative Sample-Guided and Parameter-Efficient Feature Space Adaptation for Cross-Domain Few-Shot Learning
In this paper, we look at cross-domain few-shot classification, which presents the challenging task of learning new classes in previously unseen domains from only a few labelled examples.
Enhancing Information Maximization with Distance-Aware Contrastive Learning for Source-Free Cross-Domain Few-Shot Learning
For this reason, this paper explores a Source-Free CDFSL (SF-CDFSL) problem, which addresses CDFSL using existing pretrained models rather than models trained on source data, thereby avoiding access to the source data altogether.
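A common way to adapt a pretrained model without source data is an information-maximization objective on the model's own predictions: encourage confident per-sample predictions while keeping the predicted class distribution diverse. The sketch below is a generic illustration of that objective, not the specific SF-CDFSL formulation; the function name and dimensions are hypothetical.

```python
import numpy as np

def infomax_objective(probs, eps=1e-12):
    """Generic information-maximization score on softmax predictions:
    marginal entropy (diverse class usage) minus conditional entropy
    (confidence of individual predictions). Higher is better."""
    # Conditional entropy: average per-sample prediction entropy.
    cond_ent = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Marginal entropy: entropy of the average predicted distribution.
    marginal = probs.mean(axis=0)
    marg_ent = -np.sum(marginal * np.log(marginal + eps))
    return marg_ent - cond_ent

# Hypothetical episode: 20 unlabeled target samples, 5 classes.
rng = np.random.default_rng(3)
logits = rng.normal(size=(20, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
score = infomax_objective(probs)
print(float(score))
```

In practice such a score is maximized by gradient ascent on the adapted model's parameters; here it is evaluated once on random predictions only to show the computation.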
Adapt Before Comparison: A New Perspective on Cross-Domain Few-Shot Segmentation
Few-shot segmentation performance declines substantially when facing images from a domain different from the training domain, effectively limiting real-world use cases.
Cross-Domain Few-Shot Learning via Adaptive Transformer Networks
Most few-shot learning works assume that the base and target tasks come from the same domain, hindering their practical application.
Cross-Domain Few-Shot Segmentation via Iterative Support-Query Correspondence Mining
Cross-Domain Few-Shot Segmentation (CD-FSS) poses the challenge of segmenting novel categories from a distinct domain using only limited exemplars.
Leveraging Normalization Layer in Adapters With Progressive Learning and Adaptive Distillation for Cross-Domain Few-Shot Learning
Second, to address the pitfalls of noisy statistics, we deploy two strategies: progressive training of the two adapters and an adaptive distillation technique that uses features obtained from the model with only the adapter lacking a normalization layer.
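The idea of distilling from a normalization-free adapter can be sketched generically: an adapter that normalizes with batch statistics is noisy under few-shot conditions, so its output is regularized toward the output of a twin adapter without normalization. This is a minimal illustrative sketch under assumed shapes, not the paper's architecture; `adapter` and both weight matrices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def adapter(x, w, use_norm):
    """Hypothetical residual adapter: optionally normalize with batch
    statistics, apply a linear map, and add the result back to the input."""
    h = x
    if use_norm:
        # Batch statistics are noisy when computed from only a few shots.
        mu, sigma = h.mean(axis=0), h.std(axis=0) + 1e-5
        h = (h - mu) / sigma
    return x + h @ w

d = 32
w_norm = rng.normal(scale=0.1, size=(d, d))   # adapter with normalization (student)
w_plain = rng.normal(scale=0.1, size=(d, d))  # adapter without normalization (teacher)
feats = rng.normal(size=(8, d))               # few-shot support features

student = adapter(feats, w_norm, use_norm=True)
teacher = adapter(feats, w_plain, use_norm=False)

# Distillation loss: pull student features toward the normalization-free features.
distill_loss = np.mean((student - teacher) ** 2)
print(float(distill_loss))
```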
Improving Cross-domain Few-shot Classification with Multilayer Perceptron
Multilayer perceptron (MLP) has shown its capability to learn transferable representations in various downstream tasks, such as unsupervised image classification and supervised concept generalization.
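An MLP used this way acts as a projection head on top of frozen backbone features, after which episodes can be classified by nearest prototype. The sketch below shows that generic pipeline with randomly initialized weights and hypothetical dimensions; it is not the paper's specific design.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_project(x, w1, b1, w2, b2):
    """Two-layer MLP projection head (ReLU) over frozen backbone features."""
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Hypothetical dimensions: 512-d backbone features projected to 128-d.
d_in, d_hidden, d_out = 512, 256, 128
w1 = rng.normal(scale=0.02, size=(d_in, d_hidden)); b1 = np.zeros(d_hidden)
w2 = rng.normal(scale=0.02, size=(d_hidden, d_out)); b2 = np.zeros(d_out)

# 5-way 1-shot episode: 5 support features, 10 query features.
support = rng.normal(size=(5, d_in))
query = rng.normal(size=(10, d_in))

z_s = mlp_project(support, w1, b1, w2, b2)  # 1 shot: each support is a prototype
z_q = mlp_project(query, w1, b1, w2, b2)

# Nearest-prototype classification in the projected space.
dists = np.linalg.norm(z_q[:, None, :] - z_s[None, :, :], axis=-1)
preds = dists.argmin(axis=1)
print(preds.shape)  # (10,)
```

In a real system the MLP would be trained (e.g. with a classification or contrastive loss) before episodes are evaluated; the random weights here only demonstrate the data flow.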
Cross-Level Distillation and Feature Denoising for Cross-Domain Few-Shot Classification
Conventional few-shot classification aims to learn a model on a large labelled base dataset and rapidly adapt it to a target dataset drawn from the same distribution as the base dataset.
Multi-level Relation Learning for Cross-domain Few-shot Hyperspectral Image Classification
In addition, it adopts a transformer-based cross-attention learning module to learn set-level sample relations and compute attention from query samples to support samples.
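Attention from query samples to support samples is, at its core, scaled dot-product attention where queries come from the query set and keys/values from the support set. The sketch below shows that core computation in plain numpy, without the learned projection matrices a transformer module would add; shapes and the function name are illustrative assumptions.

```python
import numpy as np

def cross_attention(q_feats, s_feats):
    """Scaled dot-product attention from query samples to support samples:
    each query feature is refined as an attention-weighted sum of support
    features (learned Q/K/V projections omitted for brevity)."""
    d_k = q_feats.shape[-1]
    scores = q_feats @ s_feats.T / np.sqrt(d_k)        # (n_query, n_support)
    # Numerically stable softmax over the support dimension.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ s_feats                           # (n_query, d)

rng = np.random.default_rng(1)
support = rng.normal(size=(25, 64))  # e.g. a 5-way 5-shot support set
query = rng.normal(size=(15, 64))
attended = cross_attention(query, support)
print(attended.shape)  # (15, 64)
```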