Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning

NeurIPS 2021 · Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu, Lei Shu

Continual learning (CL) learns a sequence of tasks incrementally, with two main objectives: overcoming catastrophic forgetting (CF) and encouraging knowledge transfer (KT) across tasks. However, most existing techniques focus only on overcoming CF and have no mechanism to encourage KT, and thus perform poorly on KT. Although several papers have tried to deal with both CF and KT, our experiments show that they suffer from serious CF when the tasks have little shared knowledge. Another observation is that most current CL methods do not use pre-trained models, even though such models have been shown to significantly improve end-task performance. For example, in natural language processing, fine-tuning a BERT-like pre-trained language model is one of the most effective approaches; for CL, however, this approach suffers from serious CF. An interesting question is thus how to make the best use of pre-trained models for CL. This paper proposes a novel model called CTR to solve these problems. Our experimental results demonstrate the effectiveness of CTR.
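The baseline the abstract alludes to (sequentially fine-tuning a BERT-like model on each task with no forgetting-prevention mechanism) corresponds to the Naive Continual Learning (NCL) entry in the results below. The following is a minimal sketch of that baseline, not of the CTR architecture itself; `task_loaders` is a hypothetical list of per-task PyTorch DataLoaders assumed to yield batches with input_ids, attention_mask, and labels.

import torch
from transformers import AutoModelForSequenceClassification

def naive_continual_finetune(task_loaders, model_name="bert-base-uncased",
                             num_labels=2, lr=2e-5, epochs=3, device="cpu"):
    # One shared classifier is fine-tuned on each task in sequence.
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=num_labels).to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)

    for task_id, loader in enumerate(task_loaders):
        model.train()
        for _ in range(epochs):
            for batch in loader:  # batch: dict with input_ids, attention_mask, labels
                batch = {k: v.to(device) for k, v in batch.items()}
                loss = model(**batch).loss
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        # No replay, regularization, or task-specific modules are used here,
        # which is why this baseline forgets earlier tasks (catastrophic forgetting).
    return model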


Datasets


Introduced in the Paper:

20Newsgroup (10 tasks)

Used in the Paper:

ASC (TIL, 19 tasks)

DSC (10 tasks)

Results from the Paper


Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Continual Learning | 20Newsgroup (10 tasks) | CTR | F1 - macro | 0.9523 | #1
Continual Learning | ASC (19 tasks) | Multi-task Learning (MTL; upper bound) | F1 - macro | 0.8811 | #1
Continual Learning | ASC (19 tasks) | Independent Learning (ONE) | F1 - macro | 0.7807 | #8
Continual Learning | ASC (19 tasks) | Naive Continual Learning (NCL) | F1 - macro | 0.7664 | #10
Continual Learning | ASC (19 tasks) | CTR | F1 - macro | 0.8362 | #2
Continual Learning | DSC (10 tasks) | CTR | F1 - macro | 0.8875 | #1
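The metric reported above is macro F1 averaged over the tasks in the sequence. A minimal sketch of that evaluation, assuming hypothetical placeholders `predict_fn` (maps a task id and its test texts to predicted labels) and `test_sets` (a list of per-task (texts, labels) pairs):

from sklearn.metrics import f1_score

def average_macro_f1(predict_fn, test_sets):
    """Compute macro F1 on each task's test set, then average across tasks."""
    scores = []
    for task_id, (texts, labels) in enumerate(test_sets):
        preds = predict_fn(task_id, texts)
        scores.append(f1_score(labels, preds, average="macro"))
    return sum(scores) / len(scores)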

Methods


No methods listed for this paper.