Adaptive Cross-Modal Few-Shot Learning

Metric-based meta-learning techniques have been applied successfully to few-shot classification problems. In this paper, we propose to leverage cross-modal information to enhance metric-based few-shot learning methods. Visual and semantic feature spaces have different structures by definition. For certain concepts, visual features may be richer and more discriminative than textual ones, while for others the inverse may be true. Moreover, when the support from visual information is limited in image classification, semantic representations (learned from unsupervised text corpora) can provide strong prior knowledge and context to aid learning. Based on these two intuitions, we propose a mechanism that adaptively combines information from both modalities according to the new image categories to be learned. Through a series of experiments, we show that this adaptive combination of the two modalities allows our model to outperform current uni-modal few-shot learning methods and modality-alignment methods by a large margin on all benchmarks and few-shot scenarios tested. Experiments also show that our model can effectively adjust its focus between the two modalities. The improvement in performance is particularly large when the number of shots is very small.
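The adaptive combination described above can be sketched as a convex mixture of a visual prototype (the mean of the support-image features) and a semantic label embedding projected into the visual space, with a per-class mixing coefficient produced by a small gating network. The layer shapes, the sigmoid gate, and all parameter names below are illustrative assumptions, not the paper's exact architecture:

```python
# Sketch of an adaptive cross-modal prototype: a convex combination of
# the visual prototype and a projected semantic embedding, where the
# mixing coefficient lambda is produced by a gating network conditioned
# on the semantic representation. All shapes and the gating form are
# simplifying assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_prototype(support_feats, word_emb, W_g, b_g, W_t, b_t):
    """Combine visual and semantic information into one class prototype.

    support_feats: (k, d) visual features of the k support images
    word_emb:      (e,)   semantic embedding of the class label
    W_t, b_t:      projection of the word embedding into visual space
    W_g, b_g:      gating network producing the mixing coefficient
    """
    visual_proto = support_feats.mean(axis=0)      # (d,) mean of support set
    semantic_proto = W_t @ word_emb + b_t          # (d,) projected semantics
    lam = sigmoid(W_g @ semantic_proto + b_g)      # scalar in (0, 1)
    # Convex combination: lam -> 1 trusts vision, lam -> 0 trusts semantics.
    return lam * visual_proto + (1.0 - lam) * semantic_proto

# Toy 1-shot example: d = 4 visual dims, e = 3 semantic dims.
d, e, k = 4, 3, 1
support = rng.normal(size=(k, d))
word = rng.normal(size=e)
W_t, b_t = rng.normal(size=(d, e)), np.zeros(d)
W_g, b_g = rng.normal(size=d), 0.0

proto = adaptive_prototype(support, word, W_g, b_g, W_t, b_t)
print(proto.shape)
```

Because the gate is conditioned on the class semantics, each category gets its own balance between modalities, which matches the intuition that some concepts are easier to discriminate visually and others textually.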

Published at NeurIPS 2019.
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Few-Shot Image Classification | Mini-Imagenet 5-way (10-shot) | AM3-TADAM | Accuracy | 81.57 | #2 |
| Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot) | AM3-TADAM | Accuracy | 65.30 | #52 |
| Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot) | AM3-TADAM | Accuracy | 78.10 | #55 |
| Few-Shot Image Classification | Tiered ImageNet 5-way (1-shot) | AM3-TADAM | Accuracy | 69.08 | #34 |
| Few-Shot Image Classification | Tiered ImageNet 5-way (5-shot) | AM3-TADAM | Accuracy | 82.58 | #37 |
