Hyper-LifelongGAN: Scalable Lifelong Learning for Image Conditioned Generation

CVPR 2021  ·  Mengyao Zhai, Lei Chen, Greg Mori

Deep neural networks are susceptible to catastrophic forgetting: when encountering a new task, they remember only the new task and fail to preserve their ability to accomplish previously learned tasks. In this paper, we study the problem of lifelong learning for generative models and propose Hyper-LifelongGAN, a novel and generic continual learning framework that is more scalable than state-of-the-art approaches. Given a sequence of tasks, conventional convolutional filters are factorized into dynamic base filters, which are generated by task-specific filter generators, and a deterministic weight matrix, which linearly combines the base filters and is shared across tasks. Moreover, the shared weight matrix is multiplied by task-specific coefficients, adding flexibility to combine the task-specific base filters differently for each task. Owing to this novel architecture, the proposed method preserves, or even improves, generation quality at a low parameter cost. We validate Hyper-LifelongGAN on diverse image-conditioned generation tasks; extensive ablation studies and comparisons with state-of-the-art models show that the proposed approach addresses catastrophic forgetting effectively.
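
To make the factorization described above concrete, the following PyTorch sketch shows one plausible reading of it: a per-task filter generator produces K dynamic base filters, and a weight matrix shared across tasks, rescaled by task-specific coefficients, linearly combines them into the final convolution kernel. This is a minimal illustrative interpretation, not the authors' implementation (no official code is listed); the generator design, the number of base filters, and the latent-code dimension are all assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FactorizedConv(nn.Module):
        """Illustrative factorized convolution in the spirit of the abstract.

        For task t, a task-specific filter generator maps a learned task code
        to K dynamic base filters. A weight matrix shared across all tasks,
        modulated by task-specific coefficients, combines the base filters
        into the out_ch output filters. All sizes here are hypothetical.
        """

        def __init__(self, in_ch, out_ch, ksize=3, num_bases=8, z_dim=16, num_tasks=4):
            super().__init__()
            self.in_ch, self.out_ch, self.ksize, self.K = in_ch, out_ch, ksize, num_bases
            # Task-specific latent codes and filter generators (one per task).
            self.task_codes = nn.Parameter(torch.randn(num_tasks, z_dim))
            self.generators = nn.ModuleList([
                nn.Linear(z_dim, num_bases * in_ch * ksize * ksize)
                for _ in range(num_tasks)
            ])
            # Deterministic weight matrix shared across tasks: combines the
            # K base filters into out_ch output filters.
            self.shared_W = nn.Parameter(torch.randn(out_ch, num_bases))
            # Task-specific coefficients that rescale the shared matrix.
            self.task_coeff = nn.Parameter(torch.ones(num_tasks, num_bases))

        def forward(self, x, task_id):
            # Dynamic base filters for this task: shape (K, in_ch, k, k).
            bases = self.generators[task_id](self.task_codes[task_id])
            bases = bases.view(self.K, self.in_ch, self.ksize, self.ksize)
            # Shared matrix modulated by this task's coefficients: (out_ch, K).
            W = self.shared_W * self.task_coeff[task_id]
            # Linear combination of base filters -> (out_ch, in_ch, k, k).
            kernel = torch.einsum('ok,kcij->ocij', W, bases)
            return F.conv2d(x, kernel, padding=self.ksize // 2)

    # Usage: one layer serving several tasks with mostly shared parameters.
    layer = FactorizedConv(in_ch=3, out_ch=16)
    x = torch.randn(1, 3, 32, 32)
    y = layer(x, task_id=0)  # (1, 16, 32, 32)

Under this reading, only the per-task generators and coefficient vectors grow with the number of tasks, while the shared weight matrix is reused, which is consistent with the scalability claim.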
