Generative Feature Replay with Orthogonal Weight Modification for Continual Learning

7 May 2020 · Gehui Shen, Song Zhang, Xiang Chen, Zhi-Hong Deng

The ability of intelligent agents to learn and remember multiple tasks sequentially is crucial to achieving artificial general intelligence. Many continual learning (CL) methods have been proposed to overcome catastrophic forgetting, which arises from the non-i.i.d. data that neural networks encounter during sequential learning. In this paper we focus on class incremental learning, a challenging CL scenario. For this scenario, generative replay is a promising strategy that generates and replays pseudo-data for previous tasks to alleviate catastrophic forgetting. However, it is hard to train a generative model continually on relatively complex data. Building on the recently proposed orthogonal weight modification (OWM) algorithm, which can approximately keep previously learned features invariant when learning new tasks, we propose to 1) replay penultimate-layer features with a generative model, and 2) leverage a self-supervised auxiliary task to further enhance feature stability. Empirical results on several datasets show that our method consistently achieves substantial improvements over the strong OWM baseline, whereas conventional generative replay consistently degrades performance. Our method also outperforms several strong baselines, including one that stores real data. In addition, we conduct experiments to study why our method is effective.
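As a rough illustration of the OWM mechanism the method builds on, the NumPy sketch below shows how a single linear layer might project its gradient onto the subspace orthogonal to previously seen inputs, so that updates for new tasks approximately leave old responses unchanged. The class name, hyperparameters (`alpha`, `lr`), and per-sample update schedule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class OWMLinear:
    """Single linear layer trained with an OWM-style projected update (sketch)."""

    def __init__(self, in_dim, out_dim, alpha=1e-3, lr=0.1):
        rng = np.random.default_rng(0)
        self.W = rng.normal(scale=0.01, size=(out_dim, in_dim))
        # P projects onto the subspace orthogonal to inputs seen so far,
        # so projected weight updates (approximately) preserve old outputs.
        self.P = np.eye(in_dim)
        self.alpha = alpha
        self.lr = lr

    def forward(self, x):
        # x: input vector of shape (in_dim,)
        return self.W @ x

    def owm_step(self, x, grad_out):
        # Plain backprop would update W with outer(grad_out, x);
        # OWM first projects the input direction through P.
        Px = self.P @ x
        self.W -= self.lr * np.outer(grad_out, Px)
        # Recursive (RLS-style) update of the projector: shrink the space
        # of allowed weight changes along the new input direction.
        self.P -= np.outer(Px, Px) / (self.alpha + x @ Px)
```

With the feature extractor stabilized this way, the paper's generative feature replay can fit a generative model to penultimate-layer features rather than raw inputs, and replay those features to the classifier head when later tasks are learned.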


