Shallow Bayesian Meta Learning for Real-World Few-Shot Recognition

Current state-of-the-art few-shot learners focus on developing effective training procedures for feature representations, before applying simple classifiers such as nearest centroid. In this paper, we take an orthogonal approach that is agnostic to the features used and focus exclusively on meta-learning the actual classifier layer. Specifically, we introduce MetaQDA, a Bayesian meta-learning generalization of the classic quadratic discriminant analysis. This setup has several benefits of interest to practitioners: meta-learning is fast and memory-efficient, and requires no fine-tuning of the features. It is agnostic to the off-the-shelf features chosen and thus will continue to benefit from advances in feature representations. Empirically, it leads to robust performance in cross-domain few-shot learning and, crucially for real-world applications, to better uncertainty calibration in predictions.
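As a rough illustration of the "fixed features, learned classifier head" setup described above, the sketch below fits a plain quadratic discriminant analysis head on pre-extracted support features of a few-shot episode and classifies queries by Gaussian log-density. This is a minimal, hypothetical sketch in NumPy; it does not reproduce MetaQDA's Bayesian meta-learned priors over the class means and covariances, and the function names and regularization constant are assumptions for illustration only.

```python
import numpy as np

def fit_qda_head(support_feats, support_labels, reg=1e-3):
    """Estimate per-class Gaussians N(mu_c, Sigma_c) from frozen backbone features.

    support_feats: (N, D) array of pre-extracted features.
    support_labels: (N,) integer class labels.
    reg: diagonal shrinkage, needed because per-class shot count << D in few-shot episodes.
    """
    params = {}
    for c in np.unique(support_labels):
        x = support_feats[support_labels == c]            # (N_c, D)
        mu = x.mean(axis=0)
        diff = x - mu
        cov = diff.T @ diff / max(len(x) - 1, 1)          # sample covariance
        cov += reg * np.eye(x.shape[1])                   # shrink toward identity
        params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def qda_predict(query_feats, params):
    """Assign each query feature to the class with the highest Gaussian log-density."""
    classes, scores = [], []
    for c, (mu, prec, logdet) in params.items():
        d = query_feats - mu
        maha = np.einsum("nd,de,ne->n", d, prec, d)       # squared Mahalanobis distance
        classes.append(c)
        scores.append(-0.5 * (maha + logdet))             # log N(x | mu_c, Sigma_c) + const
    return np.array(classes)[np.argmax(np.stack(scores, axis=1), axis=1)]
```

In the paper's setting, the interesting part is replacing these maximum-likelihood per-class estimates with a meta-learned Bayesian prior, so that covariances remain well-behaved even with one or five shots per class; the sketch only shows the classifier-head interface such a method plugs into.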

Published at ICCV 2021.
Task | Dataset | Model | Metric | Value (%) | Global Rank
Few-Shot Image Classification | CIFAR-FS 5-way (1-shot) | MetaQDA | Accuracy | 75.83 | #22
Few-Shot Image Classification | CIFAR-FS 5-way (5-shot) | MetaQDA | Accuracy | 88.79 | #18
Few-Shot Image Classification | Meta-Dataset | URT+MQDA | Accuracy | 74.3 | #6
Few-Shot Image Classification | Mini-ImageNet 5-way (1-shot) | MetaQDA | Accuracy | 67.83 | #38
Few-Shot Image Classification | Mini-ImageNet 5-way (5-shot) | MetaQDA | Accuracy | 84.28 | #26
Few-Shot Image Classification | Tiered ImageNet 5-way (1-shot) | MetaQDA | Accuracy | 74.33 | #20
Few-Shot Image Classification | Tiered ImageNet 5-way (5-shot) | MetaQDA | Accuracy | 89.56 | #10
