Uncertainty-guided Lifelong Learning in Bayesian Networks

27 Sep 2018  ·  Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell, Marcus Rohrbach

Sequential learning of tasks arriving in a continuous stream is a complex problem that becomes more challenging when the model has a fixed capacity. Lifelong learning aims at learning new tasks without forgetting previously learned ones, while also freeing up capacity for learning future tasks. We argue that identifying the most influential parameters in a representation learned for one task plays a critical role in deciding \textit{what to remember} for continual learning. Motivated by the statistically grounded uncertainty defined in Bayesian neural networks, we propose a Bayesian lifelong learning framework, \texttt{BLLL}, that addresses two lifelong learning directions: 1) completely eliminating catastrophic forgetting using weight pruning, where a hard selection mask freezes the most certain parameters (\texttt{BLLL-PRN}), and 2) reducing catastrophic forgetting by adaptively regularizing the learning rates using the parameter uncertainty (\texttt{BLLL-REG}). While \texttt{BLLL-PRN} guarantees zero forgetting by definition, \texttt{BLLL-REG}, despite exhibiting some small forgetting, is a task-agnostic lifelong learner that does not need to know when a new task arrives. This makes \texttt{BLLL-REG} a more convenient candidate for applications such as robotics or online learning, in which such information is not available. We evaluate our Bayesian learning approaches extensively on diverse object classification datasets, in short and long sequences of tasks, and perform superior to or marginally better than the existing approaches.
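To make the two mechanisms concrete, the sketch below illustrates how per-parameter uncertainty from a mean-field Gaussian posterior could drive both a hard freezing mask for the most certain weights (in the spirit of \texttt{BLLL-PRN}) and uncertainty-scaled updates (in the spirit of \texttt{BLLL-REG}). This is not the authors' code: the class and function names (`BayesianLinear`, `certainty_freeze_mask`, `uncertainty_scaled_step`), the softplus parameterization of sigma, and the quantile threshold are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of uncertainty-guided
# lifelong learning with a Bayes-by-Backprop-style mean-field Gaussian
# posterior over weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianLinear(nn.Module):
    """Linear layer with a factorized Gaussian posterior N(mu, sigma^2) per weight."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight_mu = nn.Parameter(
            torch.zeros(out_features, in_features).normal_(0, 0.1))
        # sigma is parameterized as softplus(rho) to keep it positive
        self.weight_rho = nn.Parameter(
            torch.full((out_features, in_features), -3.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def sigma(self):
        return F.softplus(self.weight_rho)

    def forward(self, x):
        # Reparameterization trick: sample weights from the posterior
        eps = torch.randn_like(self.weight_mu)
        weight = self.weight_mu + self.sigma() * eps
        return F.linear(x, weight, self.bias)


def certainty_freeze_mask(layer, quantile=0.5):
    """BLLL-PRN-style hard mask: freeze the most certain parameters,
    i.e. those with the smallest posterior std (sigma)."""
    sigma = layer.sigma().detach()
    threshold = torch.quantile(sigma.flatten(), quantile)
    return (sigma > threshold).float()   # 1 = still trainable, 0 = frozen


def uncertainty_scaled_step(layer, base_lr):
    """BLLL-REG-style update: each mean parameter's step is scaled by its
    normalized posterior std, so certain weights (small sigma) move little
    while uncertain weights stay plastic for new tasks."""
    with torch.no_grad():
        sigma = layer.sigma()
        scale = sigma / sigma.max()      # in (0, 1]; smaller = more certain
        layer.weight_mu -= base_lr * scale * layer.weight_mu.grad
```

Under these assumptions, `uncertainty_scaled_step` would replace the plain SGD step on the weight means after each backward pass, while `certainty_freeze_mask` would be applied multiplicatively to the gradients when switching tasks.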
