Modular Adaptation for Cross-Domain Few-Shot Learning

1 Apr 2021  ·  Xiao Lin, Meng Ye, Yunye Gong, Giedrius Buracas, Nikoletta Basiou, Ajay Divakaran, Yi Yao

Adapting pre-trained representations has become the go-to recipe for learning new downstream tasks with limited examples. While the literature has demonstrated great success via representation learning, in this work we show that substantial performance improvements on downstream tasks can also be achieved through appropriate design of the adaptation process. Specifically, we propose a modular adaptation method that selectively performs multiple state-of-the-art (SOTA) adaptation methods in sequence. As different downstream tasks may require different types of adaptation, our modular adaptation enables the dynamic configuration of the most suitable modules based on the downstream task. Moreover, as an extension to existing cross-domain 5-way k-shot benchmarks (e.g., miniImageNet -> CUB), we create a new high-way (~100) k-shot benchmark with data from 10 different datasets. This benchmark provides a diverse set of domains and allows the use of stronger representations learned from ImageNet. Experimental results show that by customizing the adaptation process for each downstream task, our modular adaptation pipeline (MAP) improves 5-shot classification accuracy by 3.1% over finetuning and Prototypical Networks baselines.
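
The abstract describes MAP only at a high level, and this listing carries no code, so the following is a minimal sketch of the core idea rather than the authors' implementation: treat adaptation methods as composable modules, and configure the sequence per task by validating candidate configurations on the support set alone. The module names (`center`, `l2_normalize`, `fit_prototypes`), the holdout-based selection rule, and the toy episode are all assumptions made for illustration.

```python
import numpy as np

# Minimal sketch of a modular-adaptation-pipeline-style procedure.
# All module names and the selection rule below are illustrative
# assumptions, not the authors' implementation.

# --- Candidate adaptation modules: each is fit on support-set features and
# --- returns a transform applied identically to support and query features.

def center(x_support):
    """Shift features by the support-set mean."""
    mu = x_support.mean(axis=0)
    return lambda x: x - mu

def l2_normalize(x_support):
    """Project features onto the unit sphere (support stats unused; the
    argument only keeps the module interface uniform)."""
    return lambda x: x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

# --- Candidate classifier head (Prototypical-Networks-style nearest mean).

def fit_prototypes(x, y):
    classes = np.unique(y)
    protos = np.stack([x[y == c].mean(axis=0) for c in classes])
    return lambda q: classes[((q[:, None] - protos[None]) ** 2).sum(-1).argmin(1)]

def run_pipeline(steps, head, x_sup, y_sup, x_qry):
    """Apply adaptation modules in sequence, then classify the query set."""
    for step in steps:
        transform = step(x_sup)
        x_sup, x_qry = transform(x_sup), transform(x_qry)
    return head(x_sup, y_sup)(x_qry)

def select_and_adapt(candidates, x_sup, y_sup, x_qry, seed=0):
    """Dynamically configure the pipeline per task: score each candidate
    (steps, head) by holdout accuracy on the support set only, then run the
    best configuration on the full support set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y_sup))
    n_val = max(1, len(idx) // 5)
    val, trn = idx[:n_val], idx[n_val:]

    def score(cand):
        steps, head = cand
        pred = run_pipeline(steps, head, x_sup[trn], y_sup[trn], x_sup[val])
        return (pred == y_sup[val]).mean()

    best_steps, best_head = max(candidates, key=score)
    return run_pipeline(best_steps, best_head, x_sup, y_sup, x_qry)

if __name__ == "__main__":
    # Toy 5-way 5-shot episode with 20-D features and 15 queries per class.
    rng = np.random.default_rng(1)
    means = 4.0 * np.eye(5, 20)
    y_sup = np.repeat(np.arange(5), 5)
    x_sup = means[y_sup] + rng.normal(size=(25, 20))
    y_qry = np.repeat(np.arange(5), 15)
    x_qry = means[y_qry] + rng.normal(size=(75, 20))

    step_sets = [[], [center], [l2_normalize], [center, l2_normalize]]
    candidates = [(steps, fit_prototypes) for steps in step_sets]
    pred = select_and_adapt(candidates, x_sup, y_sup, x_qry)
    print("query accuracy:", (pred == y_qry).mean())
```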
