Learning to Learn for Few-shot Continual Active Learning

7 Nov 2023  ·  Stella Ho, Ming Liu, Shang Gao, Longxiang Gao

Continual learning (CL) strives to ensure stability in solving previously seen tasks while demonstrating plasticity in novel domains. Recent advances in CL are mostly confined to a supervised learning setting, especially in the NLP domain. In this work, we consider a few-shot continual active learning (CAL) setting in which labeled data are scarce and unlabeled data are abundant but the annotation budget is limited. We propose a simple yet efficient method, called Meta-Continual Active Learning. Specifically, we employ meta-learning and experience replay to address inter-task confusion and catastrophic forgetting, and we further incorporate textual augmentations to ensure generalization. We conduct extensive experiments on benchmark text classification datasets to validate the effectiveness of the proposed method and to analyze the effect of different active learning strategies in the few-shot CAL setting. Our experimental results demonstrate that random sampling is the best default strategy for both active learning and memory sample selection when solving few-shot CAL problems.
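The abstract does not spell out the training loop, so the sketch below is only a rough illustration of how the ingredients it names (a meta-learning update, experience replay, and random-sampling acquisition) could fit together in one CAL round. All names and hyperparameters (`cal_round`, `select_for_annotation`, `inner_lr`, `replay_k`, etc.) are hypothetical, and the first-order Reptile-style meta-update is a stand-in, not necessarily the paper's exact meta-learning procedure.

```python
# Hypothetical sketch (not the authors' released code): one few-shot CAL round
# combining a first-order (Reptile-style) meta-update, experience replay, and
# random-sampling selection for annotation and for the replay memory.
import copy
import random
import torch
import torch.nn.functional as F


def select_for_annotation(unlabeled_pool, budget):
    """Random-sampling acquisition: pick `budget` unlabeled examples to label."""
    return random.sample(unlabeled_pool, min(budget, len(unlabeled_pool)))


def cal_round(model, task_batches, memory, optimizer,
              inner_lr=1e-3, inner_steps=3, meta_lr=0.1, replay_k=16):
    """One continual-learning round: inner adaptation on the current task,
    a Reptile-style meta-update, and replay of stored examples."""
    for x, y in task_batches:
        # Inner loop: adapt a temporary copy of the model on the current batch.
        fast = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            inner_opt.zero_grad()
            F.cross_entropy(fast(x), y).backward()
            inner_opt.step()

        # Reptile-style meta-update: move the slow weights toward the adapted ones.
        with torch.no_grad():
            for p, q in zip(model.parameters(), fast.parameters()):
                p.add_(meta_lr * (q - p))

        # Experience replay: revisit randomly sampled stored examples to
        # mitigate catastrophic forgetting and inter-task confusion.
        if memory:
            xs, ys = zip(*random.sample(memory, min(replay_k, len(memory))))
            optimizer.zero_grad()
            F.cross_entropy(model(torch.stack(xs)), torch.stack(ys)).backward()
            optimizer.step()

        # Memory writes also use random sampling in this sketch.
        memory.extend(zip(x, y))
```

In a full pipeline, `select_for_annotation` would be called once per task to spend the labeling budget on the unlabeled pool, and the newly labeled examples would form `task_batches` for `cal_round`; the abstract's finding is that this simple random selection outperforms more elaborate acquisition strategies in the few-shot CAL setting.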
