Can Sequential Bayesian Inference Solve Continual Learning?

Previous work in Continual Learning (CL) has used sequential Bayesian inference to prevent forgetting and to accumulate knowledge from previous tasks. A limiting factor in performing Bayesian CL has been the intractability of exact inference in a Bayesian Neural Network (NN). We perform sequential Bayesian inference in a Bayesian NN using Hamiltonian Monte Carlo (HMC) and propagate the posterior as the prior for a new task by fitting a density estimator on the HMC samples. We find that this approach fails to prevent forgetting. We therefore propose an alternative view of the CL problem that directly models the data-generating process and decomposes the CL problem into task-specific and shared parameters. This method, named Prototypical Bayesian CL, performs competitively with the latest Bayesian CL methods.
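The posterior-propagation recursion described above (sample the posterior for task t with HMC, fit a density estimator to the samples, use the fitted density as the prior for task t+1) can be illustrated with a minimal sketch. This is not the authors' code: the Bayesian logistic-regression model, the hand-rolled HMC sampler, and the full-covariance Gaussian used as the density estimator are all illustrative assumptions (the paper works with Bayesian NNs).

```python
import numpy as np

def log_gaussian(w, mean, cov_inv, log_det_cov):
    # Log-density of a multivariate Gaussian evaluated at w.
    d = w - mean
    return -0.5 * (d @ cov_inv @ d + log_det_cov + len(w) * np.log(2 * np.pi))

def log_posterior(w, X, y, prior_mean, cov_inv, log_det):
    # Unnormalised log-posterior: Bernoulli likelihood + Gaussian prior.
    logits = X @ w
    log_lik = np.sum(y * logits - np.logaddexp(0.0, logits))
    return log_lik + log_gaussian(w, prior_mean, cov_inv, log_det)

def grad_log_posterior(w, X, y, prior_mean, cov_inv):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # sigmoid
    return X.T @ (y - p) - cov_inv @ (w - prior_mean)

def hmc(init, log_p, grad_log_p, n_samples=500, step=0.05, n_leap=20, rng=None):
    # Basic HMC: leapfrog integration + Metropolis accept/reject.
    if rng is None:
        rng = np.random.default_rng(0)
    w, samples = init.copy(), []
    for _ in range(n_samples):
        p0 = rng.standard_normal(len(w))
        w_new, p = w.copy(), p0.copy()
        p += 0.5 * step * grad_log_p(w_new)          # initial half step
        for _ in range(n_leap - 1):
            w_new += step * p
            p += step * grad_log_p(w_new)
        w_new += step * p
        p += 0.5 * step * grad_log_p(w_new)          # final half step
        log_alpha = (log_p(w_new) - 0.5 * p @ p) - (log_p(w) - 0.5 * p0 @ p0)
        if np.log(rng.uniform()) < log_alpha:
            w = w_new
        samples.append(w.copy())
    return np.array(samples)

# Sequential tasks: the density fitted to task t's posterior samples
# becomes the prior for task t+1.
rng = np.random.default_rng(0)
dim = 3
prior_mean, prior_cov = np.zeros(dim), np.eye(dim)
for task in range(2):
    # Toy task data (hypothetical; stands in for a real CL task stream).
    X = rng.standard_normal((100, dim))
    w_true = rng.standard_normal(dim)
    y = (X @ w_true + 0.1 * rng.standard_normal(100) > 0).astype(float)

    cov_inv = np.linalg.inv(prior_cov)
    log_det = np.linalg.slogdet(prior_cov)[1]
    log_p = lambda w: log_posterior(w, X, y, prior_mean, cov_inv, log_det)
    grad = lambda w: grad_log_posterior(w, X, y, prior_mean, cov_inv)
    samples = hmc(prior_mean, log_p, grad, rng=rng)

    # Density estimator on HMC samples: here a full-covariance Gaussian,
    # reused as the (approximate) prior for the next task.
    prior_mean = samples.mean(axis=0)
    prior_cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(dim)
```

Note that even with high-quality HMC samples, the fitted density is only an approximation of the true posterior, and the abstract reports that this posterior-propagation scheme still fails to prevent forgetting.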
