Tackling Non-forgetting and Forward Transfer with a Unified Lifelong Learning Approach

Humans are the best example of agents that can learn a variety of skills incrementally over the course of their lives, and imbuing machines with this ability is the goal of lifelong machine learning. Ideally, lifelong learning should achieve non-forgetting, forward and backward transfer, avoid confusion, support few-shot learning, and so on. Previous approaches have focused on subsets of these properties, often by stitching together an array of separate mechanisms. In this work, we propose a simple yet powerful unified framework that supports almost all of these properties through a single central consolidation mechanism. We then describe a particular instance of this framework designed to support non-forgetting and forward transfer. This novel approach works by efficiently locating sparse neural sub-networks and controlling their consolidation during lifelong learning.
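The abstract's core idea of consolidating sparse sub-networks can be illustrated with a toy sketch. This is a hypothetical, simplified illustration (not the paper's actual method or code): each task trains only the weights no previous task has claimed, then "consolidates" (freezes) the largest-magnitude ones. Frozen weights can never be overwritten by later tasks (non-forgetting), but they still participate in the forward pass of later tasks (forward transfer). All names and thresholds here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

W = np.zeros((4, 4))                    # shared weight matrix
consolidated = np.zeros_like(W, bool)   # True where an earlier task froze a weight

def train_task(grad_steps, sparsity=0.5):
    """Update only unconsolidated weights, then freeze a sparse subset of them."""
    global consolidated
    for _ in range(grad_steps):
        grad = rng.normal(size=W.shape)            # stand-in for a real task gradient
        W[~consolidated] -= 0.1 * grad[~consolidated]  # frozen weights are untouched
    # consolidation: claim the top `sparsity` fraction of this task's free weights
    free = ~consolidated
    magnitudes = np.abs(W[free])
    k = int(sparsity * magnitudes.size)
    thresh = np.sort(magnitudes)[-k] if k > 0 else np.inf
    consolidated |= free & (np.abs(W) >= thresh)

train_task(5)                       # task 1 trains, then freezes its sub-network
snapshot = W.copy()
frozen_after_t1 = consolidated.copy()
train_task(5)                       # task 2 cannot modify task 1's frozen weights
```

After the second call, `W[frozen_after_t1]` is identical to `snapshot[frozen_after_t1]`, which is the non-forgetting guarantee; the remaining entries are free for task 2, and its forward pass can still read task 1's consolidated weights.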
