Rethinking Rehearsal in Lifelong Learning: Does An Example Contribute the Plasticity or Stability?

29 Sep 2021 · Qing Sun, Fan Lyu, Fanhua Shang, Wei Feng, Liang Wan

Lifelong Learning (LL) is the sequential counterpart of Multi-Task Learning: it learns new tasks in order, much as humans do. Traditionally, the primary goal of LL is to achieve a trade-off between Stability (remembering past tasks) and Plasticity (adapting to new tasks). Rehearsal, which reminds the model of old tasks by storing and replaying a small set of their examples, is one of the most effective ways to achieve such a trade-off. However, Stability and Plasticity (SP) are usually evaluated only after a model has been fully trained, and it remains unknown what leads to the final SP in rehearsal-based LL. In this paper, we study the cause of SP from the perspective of example difference. First, we theoretically analyze example-level SP via the influence function and derive the influence of each example on the final SP. Moreover, to avoid the burden of computing a Hessian for each example, we propose a simple yet effective MetaSP algorithm to simulate the acquisition of example-level SP. Last but not least, we find that by adjusting the weight of each training example, a solution on the SP Pareto front can be obtained, resulting in a better SP trade-off for LL. Empirical results show that our algorithm significantly outperforms state-of-the-art methods on benchmark LL datasets.
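For context, the classical influence function (Koh & Liang, 2017) scores a training example $z$ against a test point $z_{\text{test}}$ as $\mathcal{I}(z, z_{\text{test}}) = -\nabla_\theta L(z_{\text{test}}, \hat\theta)^{\top} H_{\hat\theta}^{-1} \nabla_\theta L(z, \hat\theta)$; the inverse Hessian $H_{\hat\theta}^{-1}$ is exactly the per-example cost the abstract says MetaSP is designed to avoid. The sketch below illustrates one common Hessian-free alternative: perturb per-example weights, take a single differentiable ("virtual") SGD step, and read off each example's effect on stability and plasticity proxies from the gradient with respect to those weights. It is a minimal PyTorch sketch under our own assumptions — the function name `example_sp_influence`, the weights `eps`, and the choice of held-out old-/new-task batches as stability/plasticity proxies are illustrative, not the authors' MetaSP implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


def example_sp_influence(model, buf_x, buf_y, old_x, old_y, new_x, new_y, lr=0.1):
    """Approximate each rehearsal example's influence on Stability and
    Plasticity via one differentiable (virtual) SGD step, with no explicit Hessian."""
    # One perturbation weight per rehearsal example (zero-initialised).
    eps = torch.zeros(buf_x.size(0), requires_grad=True)

    params = dict(model.named_parameters())

    # Weighted rehearsal loss; eps enters the graph so post-update losses
    # can be differentiated with respect to each example's weight.
    logits = functional_call(model, params, (buf_x,))
    per_example = F.cross_entropy(logits, buf_y, reduction="none")
    loss = ((1.0 / buf_x.size(0)) + eps) @ per_example

    # Virtual SGD step, kept differentiable w.r.t. eps via create_graph=True.
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    virtual = {n: p - lr * g for (n, p), g in zip(params.items(), grads)}

    # Stability proxy: loss on held-out old-task data after the virtual step.
    stab_loss = F.cross_entropy(functional_call(model, virtual, (old_x,)), old_y)
    # Plasticity proxy: loss on new-task data after the virtual step.
    plas_loss = F.cross_entropy(functional_call(model, virtual, (new_x,)), new_y)

    # d(proxy loss)/d(eps): a negative entry means up-weighting that example
    # would lower the corresponding loss, i.e. it helps stability/plasticity.
    stab_infl = torch.autograd.grad(stab_loss, eps, retain_graph=True)[0]
    plas_infl = torch.autograd.grad(plas_loss, eps)[0]
    return stab_infl, plas_infl


# Toy usage on random data (shapes and the linear model are placeholders).
model = nn.Sequential(nn.Linear(32, 10))
buf_x, buf_y = torch.randn(16, 32), torch.randint(0, 10, (16,))
old_x, old_y = torch.randn(8, 32), torch.randint(0, 10, (8,))
new_x, new_y = torch.randn(8, 32), torch.randint(0, 10, (8,))
stab, plas = example_sp_influence(model, buf_x, buf_y, old_x, old_y, new_x, new_y)
```

Two per-example scores come out of a single extra forward/backward pass; examples whose stability and plasticity influences conflict are the ones a Pareto-style reweighting, as described in the abstract, would have to trade off.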
