1 code implementation • 22 Apr 2024 • Kanglei Zhou, Junlin Li, Ruizhi Cai, Liyuan Wang, Xingxing Zhang, Xiaohui Liang
However, this common strategy yields suboptimal results because these backbones inherently struggle to capture the subtle cues essential for AQA.
1 code implementation • 30 Mar 2024 • HongWei Yan, Liyuan Wang, Kaisheng Ma, Yi Zhong
However, a notable gap from CL to OCL stems from an additional overfitting-underfitting dilemma introduced by the rehearsal buffer: new training samples are learned inadequately (underfitting) while a few old training samples are learned repeatedly (overfitting).
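For context, rehearsal-based continual learning typically maintains a small fixed-size buffer of past samples and mixes them into each new batch; a minimal reservoir-sampling sketch (illustrative only, not the paper's implementation; the class name `ReservoirBuffer` is hypothetical) shows why a few old samples end up replayed many times:

```python
import random

class ReservoirBuffer:
    """Fixed-size rehearsal buffer filled by reservoir sampling.

    Every sample seen so far has an equal chance of residing in the
    buffer, so a small set of old samples is replayed over and over
    (the overfitting side of the dilemma described above)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []      # stored (x, y) pairs
        self.n_seen = 0     # total samples observed in the stream

    def add(self, x, y):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = (x, y)  # evict a random old sample

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))
```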
no code implementations • 7 Mar 2024 • Kanglei Zhou, Liyuan Wang, Xingxing Zhang, Hubert P. H. Shum, Frederick W. B. Li, Jianguo Li, Xiaohui Liang
We propose Continual AQA (CAQA) to refine models using sparse new data.
1 code implementation • 13 Dec 2023 • Ziqi Yuan, Liyuan Wang, Wenbo Ding, Xingxing Zhang, Jiachen Zhong, Jianyong Ai, Jianmin Li, Jun Zhu
A commonly used strategy for supervised IOD is to encourage the current model (as a student) to mimic the behavior of the old model (as a teacher). This generally fails in SSIOD because a dominant number of object instances from old and new classes coexist unlabelled, and the teacher recognizes only a fraction of them.
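As a rough illustration of the teacher-student strategy mentioned above (a generic Hinton-style distillation loss, not the paper's exact objective), the current model is penalized for deviating from the frozen old model's predictions:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Generic KL-based distillation: the student mimics the frozen
    teacher's softened class distribution. In SSIOD this signal is
    unreliable because the teacher only recognizes old-class instances."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    # batchmean is the correct reduction for KL divergence;
    # t*t rescales gradients to match the hard-label loss magnitude
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)
```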
1 code implementation • 6 Dec 2023 • Zheqing Zhu, Rodrigo de Salvo Braz, Jalaj Bhandari, Daniel Jiang, Yi Wan, Yonathan Efroni, Liyuan Wang, Ruiyang Xu, Hongbo Guo, Alex Nikulkov, Dmytro Korenkevych, Urun Dogan, Frank Cheng, Zheng Wu, Wanqiao Xu
Reinforcement Learning (RL) offers a versatile framework for achieving long-term goals.
1 code implementation • 21 Oct 2023 • Liyuan Wang, Jingyi Xie, Xingxing Zhang, Hang Su, Jun Zhu
In this work, we present a general framework for continual learning of sequentially arriving tasks with the use of pre-training, which has emerged as a promising direction for artificial intelligence systems to accommodate real-world dynamics.
1 code implementation • NeurIPS 2023 • Liyuan Wang, Jingyi Xie, Xingxing Zhang, Mingyi Huang, Hang Su, Jun Zhu
Following these empirical and theoretical insights, we propose Hierarchical Decomposition (HiDe-)Prompt, an innovative approach that explicitly optimizes the hierarchical components with an ensemble of task-specific prompts and statistics of both uninstructed and instructed representations, coordinated by a contrastive regularization strategy.
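A heavily simplified sketch of one way to read the prompt-ensemble idea (route an input to a task-specific prompt using statistics of its uninstructed representation; the class name `TaskPromptSelector` and centroid-based routing are illustrative assumptions, and the contrastive regularization is omitted):

```python
import torch

class TaskPromptSelector:
    """Keep a mean feature vector per task from uninstructed
    representations; at test time, apply the prompt of the nearest
    task centroid. A sketch, not the HiDe-Prompt implementation."""

    def __init__(self):
        self.centroids = []  # one mean feature vector per task
        self.prompts = []    # one learned prompt per task

    def register_task(self, task_features, prompt):
        # task_features: (n_samples, feat_dim) uninstructed features
        self.centroids.append(task_features.mean(dim=0))
        self.prompts.append(prompt)

    def select(self, feature):
        dists = torch.stack([(feature - c).norm() for c in self.centroids])
        return self.prompts[int(dists.argmin())]
```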
1 code implementation • 29 Aug 2023 • Liyuan Wang, Xingxing Zhang, Qian Li, Mingtian Zhang, Hang Su, Jun Zhu, Yi Zhong
Continual learning aims to empower artificial intelligence (AI) with strong adaptability to the real world.
1 code implementation • ICCV 2023 • Gengwei Zhang, Liyuan Wang, Guoliang Kang, Ling Chen, Yunchao Wei
The goal of continual learning is to improve the performance of recognition models in learning sequentially arriving data.
1 code implementation • 31 Jan 2023 • Liyuan Wang, Xingxing Zhang, Hang Su, Jun Zhu
To cope with real-world dynamics, an intelligent system needs to incrementally acquire, update, accumulate, and exploit knowledge throughout its lifetime.
no code implementations • 7 Aug 2022 • Longxiang Jiang, Liyuan Wang, Xinkun Chu, Yonghao Xiao, Hao Zhang
Solving partial differential equations (PDEs) is a central research task in physics, biology, and chemistry.
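One widely used learning-based approach in this space is the physics-informed neural network, which treats the PDE residual as a training loss; below is a minimal generic sketch for the 1D Poisson problem u''(x) = -sin(x) with zero boundary conditions (a standard illustration, not necessarily the authors' method):

```python
import torch
import torch.nn as nn

# Physics-informed network for u''(x) = -sin(x) on [0, pi],
# u(0) = u(pi) = 0; the exact solution is u(x) = sin(x).
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(128, 1) * torch.pi
    x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + torch.sin(x)          # PDE residual: u'' + sin(x) = 0
    bc = torch.tensor([[0.0], [torch.pi]])
    loss = residual.pow(2).mean() + net(bc).pow(2).mean()  # interior + boundary
    opt.zero_grad()
    loss.backward()
    opt.step()
```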
2 code implementations • 13 Jul 2022 • Liyuan Wang, Xingxing Zhang, Qian Li, Jun Zhu, Yi Zhong
Continual learning requires incremental compatibility with a sequence of tasks.
1 code implementation • ICLR 2022 • Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li, Lanqing Hong, Shifeng Zhang, Zhenguo Li, Yi Zhong, Jun Zhu
In this work, we propose memory replay with data compression (MRDC) to reduce the storage cost of old training samples and thus increase the number that can be stored in the memory buffer.
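A minimal sketch of the storage trade-off behind replay with compression, using standard JPEG via Pillow (the codec and quality value are illustrative assumptions, not the paper's exact recipe): lower quality means a smaller footprint per exemplar, so more exemplars fit in the same buffer.

```python
import io
from PIL import Image

def compress_for_buffer(img: Image.Image, quality: int = 50) -> bytes:
    """Store a JPEG-compressed byte string instead of a raw tensor."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

def decompress_from_buffer(payload: bytes) -> Image.Image:
    """Decode a buffered exemplar back into an RGB image for replay."""
    return Image.open(io.BytesIO(payload)).convert("RGB")
```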
1 code implementation • NeurIPS 2021 • Liyuan Wang, Mingtian Zhang, Zhongfan Jia, Qian Li, Chenglong Bao, Kaisheng Ma, Jun Zhu, Yi Zhong
Without access to the old training samples, knowledge transfer from the old tasks to each new task, which may be either positive or negative, is difficult to determine.
no code implementations • 19 Apr 2021 • Liyuan Wang, Qian Li, Yi Zhong, Jun Zhu
Our solution is based on the observation that continual learning of a task sequence inevitably interferes with few-shot generalization, which makes it highly nontrivial to extend few-shot learning strategies to continual learning scenarios.
no code implementations • 5 Jan 2021 • Qijun Luo, Zhili Liu, Lanqing Hong, Chongxuan Li, Kuo Yang, Liyuan Wang, Fengwei Zhou, Guilin Li, Zhenguo Li, Jun Zhu
Semi-supervised domain adaptation (SSDA), which aims to learn models in a partially labeled target domain with the assistance of a fully labeled source domain, has attracted increasing attention in recent years.
no code implementations • CVPR 2021 • Liyuan Wang, Kuo Yang, Chongxuan Li, Lanqing Hong, Zhenguo Li, Jun Zhu
Continual learning usually assumes the incoming data are fully labeled, which may not hold in real applications.
1 code implementation • 6 Mar 2020 • Liyuan Wang, Bo Lei, Qian Li, Hang Su, Jun Zhu, Yi Zhong
Continual acquisition of novel experience without interfering with previously learned knowledge, i.e., continual learning, is critical for artificial neural networks but is limited by catastrophic forgetting.
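Catastrophic forgetting is easy to observe directly: train a network on one task, then on a second, and re-evaluate on the first. A generic PyTorch sketch follows (the `task_a_loader` / `task_b_loader` names are hypothetical data loaders, and this is a demonstration of the phenomenon rather than any paper's method):

```python
import torch

def train_epochs(model, loader, epochs=1, lr=1e-3):
    """Plain supervised training on one task's data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def evaluate(model, loader):
    """Classification accuracy on a held-out loader."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

# Sequential training exposes forgetting: acc_a_after_b < acc_a_before.
# train_epochs(model, task_a_loader); acc_a_before = evaluate(model, task_a_test)
# train_epochs(model, task_b_loader); acc_a_after_b = evaluate(model, task_a_test)
```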