Local descriptor-based multi-prototype network for few-shot learning

Prototype-based few-shot learning methods are promising: they are simple yet effective at handling any-shot problems, and many prototype-related works have followed. However, these traditional prototype-based methods generally use only a single prototype to represent a class, which cannot effectively estimate the complicated distribution of a class. To tackle this problem, we propose a novel Local descriptor-based Multi-Prototype Network (LMPNet), a well-designed framework that generates an embedding space with multiple prototypes. Specifically, the proposed LMPNet employs local descriptors to represent each image, which capture more informative and subtler cues than the commonly adopted image-level features. Moreover, to alleviate the uncertainty introduced by the fixed construction of prototypes (averaging over samples), we introduce a channel squeeze and spatial excitation (sSE) attention module that learns multiple local descriptor-based prototypes for each class through end-to-end training. Extensive experiments on both few-shot and fine-grained few-shot image classification have been conducted on various benchmark datasets, including miniImageNet, tieredImageNet, Stanford Dogs, Stanford Cars, and CUB-200-2010. On these datasets, LMPNet shows tangible performance improvements over the baseline models.
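The sSE attention named in the abstract follows the channel squeeze and spatial excitation design of Roy et al. (2018): a 1x1 convolution squeezes the channel axis to a single map whose sigmoid becomes per-location weights. Below is a minimal PyTorch sketch of how K such attention heads could turn an image's local descriptors into K prototypes. The head count, the mean pooling, and the `MultiPrototypeHead` class are illustrative assumptions, not the paper's exact construction; only the sSE design itself comes from the abstract.

```python
import torch
import torch.nn as nn

class SpatialSE(nn.Module):
    """Channel squeeze and spatial excitation (sSE), after Roy et al. (2018):
    a 1x1 convolution collapses the channel axis to a single map, and a
    sigmoid turns that map into per-location attention weights."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.squeeze = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) grid of local descriptors from the backbone
        attn = torch.sigmoid(self.squeeze(x))  # (B, 1, H, W)
        return x * attn                        # spatially reweighted descriptors

class MultiPrototypeHead(nn.Module):
    """Hypothetical head: K parallel sSE modules, each pooling the
    attention-weighted local descriptors into one prototype, so a class
    is represented by K learned prototypes rather than one fixed mean."""
    def __init__(self, in_channels: int, num_prototypes: int = 3):
        super().__init__()
        self.heads = nn.ModuleList(
            SpatialSE(in_channels) for _ in range(num_prototypes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> (B, K, C), i.e. K prototypes per image
        protos = [head(x).mean(dim=(2, 3)) for head in self.heads]  # each (B, C)
        return torch.stack(protos, dim=1)

# Toy usage: 5 support images, 64-d local descriptors on a 21x21 grid.
feats = torch.randn(5, 64, 21, 21)
prototypes = MultiPrototypeHead(64)(feats)
print(prototypes.shape)  # torch.Size([5, 3, 64])
```

Per-image prototypes from a class's support set would then be aggregated (e.g., averaged per head) into the class's K prototypes; the abstract does not spell out that step, so the pooling shown here is only one plausible choice.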
