1 code implementation • 23 Jan 2025 • Junhao Zheng, Xidi Cai, Shengjie Qiu, Qianli Ma
Recent advancements in large language models (LLMs) reveal a perplexing phenomenon in continual learning: despite extensive training, models experience significant performance declines, raising questions about task alignment and underlying knowledge retention.
1 code implementation • 10 Jun 2024 • Junhao Zheng, Shengjie Qiu, Chengming Shi, Qianli Ma
This survey delves into the sophisticated landscape of lifelong learning, categorizing strategies into two primary groups: Internal Knowledge and External Knowledge.
2 code implementations • 16 Feb 2024 • Shengjie Qiu, Junhao Zheng, Zhen Liu, Yicheng Luo, Qianli Ma
For the E2O problem, we use knowledge distillation to preserve the model's discriminative ability for old entities.
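As a rough illustration of this idea, the snippet below sketches a distillation loss that restricts both the frozen old model's and the current model's predictions to the old entity classes and penalises divergence between them. The function name, temperature scaling, and KL direction are my assumptions for a minimal sketch, not necessarily the paper's exact formulation.

```python
import math

def _softmax(logits, temperature):
    # Temperature-scaled softmax over a list of raw scores.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(new_logits, old_logits, old_class_ids, temperature=2.0):
    """Hypothetical KD loss for old entity classes (sketch, not the paper's code).

    new_logits: current model's logits for one token
    old_logits: frozen old model's logits for the same token
    old_class_ids: indices of the entity classes seen in earlier tasks
    """
    # Restrict both distributions to the old entity classes only,
    # so the loss constrains exactly the knowledge being preserved.
    p_old = _softmax([old_logits[i] for i in old_class_ids], temperature)
    p_new = _softmax([new_logits[i] for i in old_class_ids], temperature)
    # KL(p_old || p_new), scaled by T^2 as is conventional in distillation.
    return temperature ** 2 * sum(
        p * math.log(p / q) for p, q in zip(p_old, p_new)
    )
```

If the new model reproduces the old model's distribution over old classes, the loss is zero; the more its predictions drift on those classes, the larger the penalty.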
2 code implementations • 13 Feb 2024 • Junhao Zheng, Shengjie Qiu, Qianli Ma
The concepts in Concept-1K are discrete, interpretable units of knowledge that allow for fine-grained analysis of learning and forgetting processes.
2 code implementations • 13 Dec 2023 • Junhao Zheng, Shengjie Qiu, Qianli Ma
Most existing methods assume that catastrophic forgetting is the biggest obstacle to achieving superior IL performance and propose various techniques to overcome this issue.
2 code implementations • 19 Jun 2023 • Junhao Zheng, Qianli Ma, Shengjie Qiu, Yue Wu, Peitian Ma, Junlong Liu, Huawen Feng, Xichen Shang, Haibin Chen
Intriguingly, the unified objective can be seen as the sum of the vanilla fine-tuning objective, which learns new knowledge from target data, and the causal objective, which preserves old knowledge from PLMs.
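The decomposition described above might be written as follows; the symbols here are my own shorthand, not necessarily the paper's notation:

```latex
\mathcal{L}_{\text{unified}}(\theta)
  \;=\;
  \underbrace{\mathcal{L}_{\text{FT}}\bigl(\theta;\,\mathcal{D}_{\text{target}}\bigr)}_{\text{learn new knowledge from target data}}
  \;+\;
  \underbrace{\mathcal{L}_{\text{causal}}\bigl(\theta;\,\theta_{\text{PLM}}\bigr)}_{\text{preserve old knowledge from the PLM}}
```

Here $\theta$ denotes the current model parameters, $\theta_{\text{PLM}}$ the pre-trained language model's parameters, and $\mathcal{D}_{\text{target}}$ the target-task data; the sum makes explicit that vanilla fine-tuning and knowledge preservation act as two additive terms of one objective.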