A Study on Representation Transfer for Few-Shot Learning

Few-shot classification aims to classify new object categories well using only a few labeled examples. Transferring feature representations from other models is a popular approach to this problem. In this work we perform a systematic study of various feature representations for few-shot classification, including representations learned from MAML, supervised classification, and several common self-supervised tasks. We find that learning from more complex tasks tends to give better representations for few-shot classification, and we therefore propose using representations learned from multiple tasks. Coupled with new techniques for feature selection and voting to handle the issue of small sample size, our direct transfer learning method offers performance comparable to the state of the art on several benchmark datasets.
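To make the transfer-plus-voting idea concrete, here is a minimal sketch of one plausible instantiation, not the authors' exact pipeline: features from several frozen pretrained encoders are assumed to be pre-extracted and concatenated, a simple feature-selection step picks a subset of dimensions, and a small classifier trained on the few labeled support examples votes over repeated subsets. The function name `few_shot_predict`, the random-subset selection rule, and all parameter names are illustrative assumptions.

```python
# Illustrative sketch only -- not the paper's exact method.
# Assumes features were already extracted by frozen, pretrained encoders
# (e.g., from MAML, supervised, and self-supervised models) and concatenated.
import numpy as np
from sklearn.linear_model import LogisticRegression

def few_shot_predict(support_feats, support_labels, query_feats,
                     n_select=256, n_votes=5, seed=0):
    """Classify query examples from a handful of labeled support examples.

    support_feats : (n_support, d) concatenated features of labeled examples.
    support_labels: (n_support,) integer class labels.
    query_feats   : (n_query, d) concatenated features of unlabeled queries.
    """
    rng = np.random.default_rng(seed)
    d = support_feats.shape[1]
    votes = []
    for _ in range(n_votes):
        # Feature selection: here a random subset of dimensions, standing in
        # for whatever selection criterion one prefers.
        idx = rng.choice(d, size=min(n_select, d), replace=False)
        clf = LogisticRegression(max_iter=1000)
        clf.fit(support_feats[:, idx], support_labels)
        votes.append(clf.predict(query_feats[:, idx]))
    # Majority vote across classifiers trained on different feature subsets,
    # which helps stabilize predictions when the support set is tiny.
    votes = np.stack(votes)  # (n_votes, n_query)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```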
