MetaFun: Meta-Learning with Iterative Functional Updates

We develop a functional encoder-decoder approach to supervised meta-learning, where labeled data is encoded into an infinite-dimensional functional representation rather than a finite-dimensional one. Furthermore, rather than directly producing the representation, we learn a neural update rule resembling functional gradient descent, which iteratively improves the representation. The final representation is used to condition the decoder to make predictions on unlabeled data. Our approach is the first to demonstrate the success of encoder-decoder-style meta-learning methods, such as conditional neural processes, on large-scale few-shot classification benchmarks such as miniImageNet and tieredImageNet, where it achieves state-of-the-art performance.
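
To make the iterative update concrete, below is a minimal sketch of a MetaFun-style model. It represents the function r(·) by its values at the context and target inputs, computes local updates at the labeled context points, and propagates them everywhere with a kernel, mirroring a functional gradient step. The RBF kernel, the MLP sizes, and all hyperparameters are illustrative assumptions, not the paper's exact architecture (the paper's variants use learned deep kernels or attention).

```python
# Minimal sketch of MetaFun-style iterative functional updates (PyTorch).
# All module shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


def rbf_kernel(a, b, lengthscale=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2))
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2 * lengthscale ** 2))


class MetaFunSketch(nn.Module):
    def __init__(self, x_dim, y_dim, r_dim=64, n_iters=5, lr=0.1):
        super().__init__()
        self.n_iters, self.lr, self.r_dim = n_iters, lr, r_dim
        # Local update u(r(x_i), y_i), computed only at labeled points.
        self.local_update = nn.Sequential(
            nn.Linear(r_dim + y_dim, 128), nn.ReLU(), nn.Linear(128, r_dim)
        )
        # Decoder maps the final representation at x to a prediction.
        self.decoder = nn.Sequential(
            nn.Linear(r_dim + x_dim, 128), nn.ReLU(), nn.Linear(128, y_dim)
        )

    def forward(self, x_ctx, y_ctx, x_tgt):
        # Represent r(.) by its values at context and target inputs.
        r_ctx = torch.zeros(x_ctx.size(0), self.r_dim)
        r_tgt = torch.zeros(x_tgt.size(0), self.r_dim)
        k_cc = rbf_kernel(x_ctx, x_ctx)  # propagate updates ctx -> ctx
        k_tc = rbf_kernel(x_tgt, x_ctx)  # propagate updates ctx -> tgt
        for _ in range(self.n_iters):
            # Compute local updates where labels exist, then smooth them
            # over the input space -- the functional-gradient-like step.
            u = self.local_update(torch.cat([r_ctx, y_ctx], dim=-1))
            r_ctx = r_ctx - self.lr * (k_cc @ u)
            r_tgt = r_tgt - self.lr * (k_tc @ u)
        # Condition the decoder on the final representation at targets.
        return self.decoder(torch.cat([r_tgt, x_tgt], dim=-1))


# Usage: a toy 1-D regression task, 5 labeled context points, 10 targets.
model = MetaFunSketch(x_dim=1, y_dim=1)
x_c, y_c, x_t = torch.randn(5, 1), torch.randn(5, 1), torch.randn(10, 1)
preds = model(x_c, y_c, x_t)  # (10, 1) predictions at target inputs
```

Replacing the fixed RBF kernel with dot-product attention over learned embeddings yields the attention variant; everything else in the update loop stays the same.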

ICML 2020
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot) | MetaFun-Attention | Accuracy | 64.13 | #56 |
| Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot) | MetaFun-Attention | Accuracy | 80.82 | #42 |
| Few-Shot Image Classification | Tiered ImageNet 5-way (1-shot) | MetaFun-Attention | Accuracy | 67.72 | #38 |
| Few-Shot Image Classification | Tiered ImageNet 5-way (5-shot) | MetaFun-Kernel | Accuracy | 83.28 | #35 |
