Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation

Few-shot classification aims to recognize novel categories with only a few labeled images per class. Existing metric-based few-shot classification algorithms predict categories by comparing the feature embeddings of query images with those of a few labeled images (support examples) using a learned metric function. While these methods have demonstrated promising performance, they often fail to generalize to unseen domains due to the large discrepancy of feature distributions across domains. In this work, we address the problem of few-shot classification under domain shift for metric-based methods. Our core idea is to use feature-wise transformation layers that augment the image features with affine transforms, simulating various feature distributions under different domains during training. To capture the variation of feature distributions across domains, we further apply a learning-to-learn approach to search for the hyper-parameters of the feature-wise transformation layers. We conduct extensive experiments and ablation studies under the domain generalization setting using five few-shot classification datasets: mini-ImageNet, CUB, Cars, Places, and Plantae. Experimental results demonstrate that the proposed feature-wise transformation layer is applicable to various metric-based models and provides consistent improvements in few-shot classification performance under domain shift.
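The feature-wise transformation described above can be sketched as a small PyTorch module. This is a minimal illustration, not the authors' released code: the layer names, initialization values, and sampling details here are assumptions. The key idea it shows is that, during training only, a per-channel scale (around 1) and bias (around 0) are sampled from Gaussians whose spreads are themselves learnable hyper-parameters, and applied as an affine transform to intermediate feature maps.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureWiseTransform(nn.Module):
    """Illustrative feature-wise transformation layer.

    In training mode, samples per-channel affine parameters
    (scale gamma ~ N(1, sigma_gamma), bias beta ~ N(0, sigma_beta))
    to simulate feature distributions of unseen domains.
    In eval mode, the layer is an identity.
    """

    def __init__(self, num_channels, init_gamma=0.3, init_beta=0.5):
        super().__init__()
        # Learnable spread hyper-parameters (init values are illustrative;
        # in the paper they are tuned with a learning-to-learn procedure).
        self.theta_gamma = nn.Parameter(
            torch.full((1, num_channels, 1, 1), init_gamma))
        self.theta_beta = nn.Parameter(
            torch.full((1, num_channels, 1, 1), init_beta))

    def forward(self, x):
        # x: feature map of shape (batch, channels, height, width)
        if not self.training:
            return x
        # softplus keeps the sampled standard deviations positive.
        std_gamma = F.softplus(self.theta_gamma)
        std_beta = F.softplus(self.theta_beta)
        # One (gamma, beta) sample per example and channel, broadcast over H, W.
        noise_shape = (x.size(0), x.size(1), 1, 1)
        gamma = 1.0 + torch.randn(noise_shape, device=x.device) * std_gamma
        beta = torch.randn(noise_shape, device=x.device) * std_beta
        return gamma * x + beta
```

In practice such layers are inserted after the batch-normalization layers of the metric model's feature encoder, so the simulated distribution shift perturbs intermediate activations rather than the input images.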

PDF Abstract (ICLR 2020)

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Cross-Domain Few-Shot | Cars | FWT | 5 shot | 44.90 | #8 |
| Cross-Domain Few-Shot | ChestX | FWT | 5 shot | 25.18 | #6 |
| Cross-Domain Few-Shot | CropDisease | FWT | 5 shot | 87.11 | #8 |
| Cross-Domain Few-Shot | CUB | FWT | 5 shot | 66.98 | #6 |
| Cross-Domain Few-Shot | EuroSAT | FWT | 5 shot | 83.01 | #8 |
| Cross-Domain Few-Shot | ISIC2018 | FWT | 5 shot | 43.17 | #10 |
| Cross-Domain Few-Shot | Places | FWT | 5 shot | 73.94 | #7 |
| Cross-Domain Few-Shot | Plantae | FWT | 5 shot | 53.85 | #7 |
