Encoders and Ensembles for Task-Free Continual Learning

29 Sep 2021 · Murray Shanahan, Christos Kaplanis, Jovana Mitrović

We present an architecture that is effective for continual learning in an especially demanding setting, where task boundaries do not exist or are unknown, and where classes have to be learned online (with each presented only once). To obtain good performance under these constraints while mitigating catastrophic forgetting, we exploit recent advances in contrastive, self-supervised learning, which allow us to use a pre-trained, general-purpose image encoder whose weights can be frozen, precluding forgetting in the encoder. The pre-trained encoder also greatly simplifies the downstream task of classification, which we solve with an ensemble of very simple classifiers. Collectively, the ensemble performs much better than any individual classifier, an effect that is amplified through specialisation and competitive selection. We assess the encoders-and-ensembles architecture on standard continual learning benchmarks, where it outperforms the prior state of the art by a large margin on the hardest problems, as well as in less familiar settings where the data distribution changes gradually or classes are presented one at a time.
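As a concrete illustration of the idea, here is a minimal PyTorch sketch of an encoders-and-ensembles classifier. It is a reconstruction from the abstract, not the authors' implementation: the frozen encoder is a torchvision ResNet-18 standing in for a contrastive, self-supervised encoder, each ensemble member is a single linear head paired with a learned key vector, and on each example only the top-k members whose keys best match the embedding vote and receive gradients, which is one plausible form of competitive selection. The names `EnsembleClassifier` and `online_step`, and all hyperparameter values, are hypothetical.

```python
# Sketch of an encoders-and-ensembles classifier (assumptions noted above;
# not the paper's exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class EnsembleClassifier(nn.Module):
    def __init__(self, embed_dim, n_classes, n_members=32, top_k=4):
        super().__init__()
        self.top_k = top_k
        # One key per member, used for competitive selection.
        self.keys = nn.Parameter(torch.randn(n_members, embed_dim))
        # Each ensemble member is a very simple (linear) classifier.
        self.heads = nn.ModuleList(
            nn.Linear(embed_dim, n_classes) for _ in range(n_members)
        )

    def forward(self, z):
        # Cosine similarity between the embedding and every member's key.
        sim = F.normalize(z, dim=-1) @ F.normalize(self.keys, dim=-1).t()
        top_sim, idx = sim.topk(self.top_k, dim=-1)          # (B, k)
        weights = top_sim.softmax(dim=-1)                    # soft vote weights
        logits = torch.stack([h(z) for h in self.heads], 1)  # (B, M, C)
        # Keep only the selected members; gradients flow only to them
        # (and to their keys via the vote weights), driving specialisation.
        chosen = logits.gather(
            1, idx.unsqueeze(-1).expand(-1, -1, logits.size(-1))
        )                                                    # (B, k, C)
        return (weights.unsqueeze(-1) * chosen).sum(dim=1)   # (B, C)


# Frozen, general-purpose encoder: its weights never change, so it cannot
# forget. (ImageNet-supervised ResNet-18 is a stand-in here for a
# contrastively pre-trained encoder.)
encoder = resnet18(weights="IMAGENET1K_V1")
encoder.fc = nn.Identity()
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

model = EnsembleClassifier(embed_dim=512, n_classes=10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)


def online_step(x, y):
    """One task-free, online update: each example is seen exactly once."""
    with torch.no_grad():
        z = encoder(x)
    loss = F.cross_entropy(model(z), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In this sketch all plasticity lives in the small ensemble heads, and the key-based gating encourages members to specialise on different regions of the frozen embedding space, which is one way to realise the specialisation and competitive selection the abstract describes.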
