Continual learning aims to improve the ability of modern learning systems to deal with non-stationary distributions, typically by attempting to learn a series of tasks sequentially.
A major obstacle to developing artificial intelligence applications capable of true lifelong learning is that artificial neural networks rapidly and catastrophically forget previously learned tasks when trained on a new one.
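Catastrophic forgetting can be illustrated with a deliberately minimal sketch: a single-parameter linear model trained with plain gradient descent on one task and then on a conflicting second task, with no mechanism to protect earlier knowledge. The tasks (y = x, then y = -x) and all function names here are hypothetical choices for illustration, not taken from any particular method in the literature.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)

def train(w, x, y, lr=0.1, steps=200):
    # Plain gradient descent on mean squared error for a 1-D linear model.
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)  # d/dw of mean((w*x - y)^2)
        w -= lr * grad
    return w

def mse(w, x, y):
    return float(np.mean((w * x - y) ** 2))

w = 0.0
w = train(w, x, x)            # task A: regress y = x
loss_A_before = mse(w, x, x)  # near zero once task A is learned
w = train(w, x, -x)           # task B: regress y = -x, overwriting w
loss_A_after = mse(w, x, x)   # task A performance collapses

print(loss_A_before, loss_A_after)
```

Because nothing constrains the parameter while task B is trained, the solution for task A (w near 1) is overwritten by the solution for task B (w near -1), and the task A loss grows by orders of magnitude; continual-learning methods aim to prevent exactly this overwriting.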
Importantly, the benefits of Bayesian principles are preserved: predictive probabilities are well-calibrated, uncertainties on out-of-distribution data are improved, and continual-learning performance is boosted.
Continual learning has recently received a great deal of attention, and several approaches have been proposed.
Research on deep neural networks with discrete parameters, and on their deployment in embedded systems, is an active and promising area.
In the continual learning setting, tasks are encountered sequentially.