no code implementations • 21 Apr 2024 • Songlin Dong, Yingjie Chen, Yuhang He, Yuhan Jin, Alex C. Kot, Yihong Gong
Online task-free continual learning (OTFCL) is a more challenging variant of continual learning that emphasizes gradually shifting task boundaries and learning in an online mode.
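A minimal sketch of the OTFCL setting as described above (illustrative only, not this paper's method): the learner makes a single pass over a stream whose class distribution drifts gradually, with no task identifiers or boundary signals. The synthetic stream and linear model are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(32, 10)               # hypothetical feature -> class head
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def make_stream(n_batches=100, batch_size=8):
    """Simulated boundary-free stream: the class distribution drifts
    gradually, so there is no clean point where a 'new task' begins."""
    for t in range(n_batches):
        hi = min(10, 5 + t // 20)        # classes slowly expand from {0..4} to {0..9}
        y = torch.randint(0, hi, (batch_size,))
        x = torch.randn(batch_size, 32) + y.float().unsqueeze(1) * 0.1
        yield x, y

for x, y in make_stream():
    # online mode: each batch is seen exactly once; no task labels, no revisiting
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```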
no code implementations • 27 Mar 2024 • Shenxing Wei, Xing Wei, Zhiheng Ma, Songlin Dong, Shaochen Zhang, Yihong Gong
Recent research in this domain has emphasized the need for large volumes of training data, overlooking the practical scenario in which, after the model is deployed, unlabeled data containing both normal and abnormal samples can be used to improve its performance.
no code implementations • 11 Mar 2024 • Xinyuan Gao, Songlin Dong, Yuhang He, Xing Wei, Yihong Gong
In addition, to address the classifier's bias towards the new classes, we propose a novel approach that generates pseudo-features to correct the classifier.
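One common way to realize pseudo-feature generation for classifier correction, sketched here under assumptions (the paper's generator may well differ): keep per-class feature statistics for the old classes, sample pseudo-features from them, and fine-tune only the classifier head on a balanced mix of old pseudo-features and real new-class features. All dimensions and values below are illustrative.

```python
import torch
import torch.nn as nn

feat_dim, n_old, n_new = 64, 5, 5
classifier = nn.Linear(feat_dim, n_old + n_new)
opt = torch.optim.SGD(classifier.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# per-class Gaussian statistics of old classes, stored when they were learned
# (hypothetical values standing in for statistics from a real backbone)
old_means = torch.randn(n_old, feat_dim)
old_stds = torch.ones(n_old, feat_dim) * 0.5

def sample_pseudo_features(n_per_class):
    """Draw pseudo-features for each old class from its stored Gaussian."""
    feats, labels = [], []
    for c in range(n_old):
        feats.append(old_means[c] + old_stds[c] * torch.randn(n_per_class, feat_dim))
        labels.append(torch.full((n_per_class,), c, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)

# real features of the new classes, as produced by a frozen backbone (stand-in)
new_feats = torch.randn(100, feat_dim)
new_labels = torch.randint(n_old, n_old + n_new, (100,))

for _ in range(20):  # balanced fine-tuning of the classifier head only
    pf, pl = sample_pseudo_features(n_per_class=20)
    x = torch.cat([pf, new_feats])
    y = torch.cat([pl, new_labels])
    opt.zero_grad()
    loss = loss_fn(classifier(x), y)
    loss.backward()
    opt.step()
```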
1 code implementation • ICCV 2023 • Songlin Dong, Haoyu Luo, Yuhang He, Xing Wei, Yihong Gong
Current class-incremental learning research mainly focuses on single-label classification, while multi-label class-incremental learning (MLCIL), which has more practical application scenarios, is rarely studied.
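A hedged sketch of what changes in MLCIL relative to the single-label setting (illustrative, not this paper's method): each image carries a multi-hot target, and in an incremental session typically only the new classes are annotated, so old classes present in the image go unlabelled. All names and shapes are assumptions.

```python
import torch
import torch.nn as nn

n_old, n_new = 10, 5
head = nn.Linear(64, n_old + n_new)
opt = torch.optim.SGD(head.parameters(), lr=0.01)
bce = nn.BCEWithLogitsLoss()

feats = torch.randn(8, 64)                       # features from a frozen backbone
# multi-hot targets for the current session: only new classes are annotated
targets = torch.zeros(8, n_old + n_new)
targets[:, n_old:] = (torch.rand(8, n_new) > 0.5).float()

# supervise only the new-class columns; old-class logits get no labels here,
# which is exactly the partial-annotation difficulty MLCIL must handle
logits = head(feats)
loss = bce(logits[:, n_old:], targets[:, n_old:])
opt.zero_grad()
loss.backward()
opt.step()
```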
no code implementations • CVPR 2023 • Xinyuan Gao, Yuhang He, Songlin Dong, Jie Cheng, Xing Wei, Yihong Gong
Deep neural networks suffer from catastrophic forgetting in class-incremental learning: the classification accuracy on old classes deteriorates drastically as the networks learn new classes.
no code implementations • 11 Mar 2022 • Xiaohan Zhang, Songlin Dong, Jinjie Chen, Qi Tian, Yihong Gong, Xiaopeng Hong
In this paper, we focus on a new and challenging decentralized machine learning paradigm in which there is a continuous inflow of data to be processed and the data are stored in multiple repositories.
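A minimal sketch of this decentralized setting under assumptions (not the paper's method): data keeps arriving at several repositories, each repository trains locally on its latest inflow, and a coordinator averages the resulting weights, FedAvg-style. Repository count, model, and data are hypothetical.

```python
import copy
import torch
import torch.nn as nn

global_model = nn.Linear(32, 10)
loss_fn = nn.CrossEntropyLoss()

def local_update(model, x, y, steps=5):
    """Train a copy of the global model on one repository's newest data."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model.state_dict()

for round_ in range(3):                       # rounds of continuous data inflow
    states = []
    for repo in range(4):                     # four hypothetical repositories
        x = torch.randn(16, 32)               # this repository's newest data
        y = torch.randint(0, 10, (16,))
        states.append(local_update(global_model, x, y))
    # coordinator: average the local weights back into the global model
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)
```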
1 code implementation • CVPR 2020 • Xiaoyu Tao, Xiaopeng Hong, Xinyuan Chang, Songlin Dong, Xing Wei, Yihong Gong
Few-shot class-incremental learning (FSCIL) requires CNN models to incrementally learn new classes from very few labelled samples without forgetting the previously learned ones.
Ranked #8 on Few-Shot Class-Incremental Learning on CIFAR-100 (Average Accuracy metric)
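A minimal sketch of the FSCIL protocol and the average-accuracy metric from the leaderboard entry above, under assumptions: a CIFAR-100-style split (60 base classes, then eight 5-way 5-shot sessions), with a nearest-class-mean learner and synthetic features as illustrative stand-ins rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_classes = 16, 100
centers = rng.standard_normal((n_classes, dim)) * 3.0   # fixed class centers

def sample(c, n):
    """Draw n feature vectors for class c (a Gaussian blob around its center)."""
    return centers[c] + 0.3 * rng.standard_normal((n, dim))

prototypes = {}      # class -> mean feature, grown session by session
session_acc = []

# base session with ample data, then eight 5-way 5-shot incremental sessions
sessions = [list(range(60))] + [list(range(60 + 5 * s, 65 + 5 * s)) for s in range(8)]
for s, classes in enumerate(sessions):
    shots = 500 if s == 0 else 5             # very few labelled samples after base
    for c in classes:
        prototypes[c] = sample(c, shots).mean(axis=0)

    # test on ALL classes seen so far: old classes must not be forgotten
    seen = sorted(prototypes)
    proto = np.stack([prototypes[c] for c in seen])
    hits = total = 0
    for c in seen:
        x = sample(c, 20)                    # fresh test samples
        pred = ((x[:, None, :] - proto) ** 2).sum(-1).argmin(axis=1)
        hits += (np.asarray(seen)[pred] == c).sum()
        total += len(x)
    session_acc.append(hits / total)

# the leaderboard's Average Accuracy metric: mean accuracy across all sessions
print(f"average accuracy over {len(sessions)} sessions: {np.mean(session_acc):.3f}")
```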