no code implementations • 16 Apr 2024 • Da-Wei Zhou, Zhi-Hong Qi, Han-Jia Ye, De-Chuan Zhan
The era of pre-trained models has ushered in a wealth of new insights for the machine learning community.
1 code implementation • 18 Mar 2024 • Da-Wei Zhou, Hai-Long Sun, Han-Jia Ye, De-Chuan Zhan
Despite the strong performance of Pre-Trained Models (PTMs) in CIL, a critical issue persists: learning new classes often results in the overwriting of old ones.
2 code implementations • 29 Jan 2024 • Da-Wei Zhou, Hai-Long Sun, Jingyi Ning, Han-Jia Ye, De-Chuan Zhan
Nowadays, real-world applications often face streaming data, which requires the learning system to absorb new knowledge as data evolves.
1 code implementation • NeurIPS 2023 • Qi-Wei Wang, Da-Wei Zhou, Yi-Kai Zhang, De-Chuan Zhan, Han-Jia Ye
In this Few-Shot Class-Incremental Learning (FSCIL) scenario, existing methods either introduce extra learnable components or rely on a frozen feature extractor to mitigate catastrophic forgetting and overfitting problems.
1 code implementation • 13 Sep 2023 • Hai-Long Sun, Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan
While traditional machine learning can effectively tackle a wide range of problems, it primarily operates within a closed-world setting, which presents limitations when dealing with streaming data.
no code implementations • 14 Jul 2023 • Qi-Wei Wang, Hongyu Lu, Yu Chen, Da-Wei Zhou, De-Chuan Zhan, Ming Chen, Han-Jia Ye
The Click-Through Rate (CTR) prediction task is critical in industrial recommender systems, where models are usually deployed on dynamic streaming data in practical applications.
no code implementations • 30 May 2023 • Da-Wei Zhou, Yuanhan Zhang, Jingyi Ning, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu
While traditional CIL methods focus on visual information to grasp core features, recent advances in Vision-Language Models (VLM) have shown promising capabilities in learning generalizable representations with the aid of textual information.
1 code implementation • 14 Apr 2023 • Bowen Zheng, Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan
In this paper, we encourage the model to preserve more local information as the training procedure goes on and devise a Locality-Preserved Attention (LPA) layer to emphasize the importance of local features.
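The locality idea above can be illustrated with a minimal sketch: add a distance-based bias to raw attention scores so each query favours nearby tokens before the softmax. This is a simplified illustration of emphasizing local features, not the paper's exact LPA layer; the `strength` knob and 1-D positions are assumptions for the example.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def locality_biased_attention(scores, positions, strength=1.0):
    """Bias attention toward nearby tokens (sketch, not the paper's LPA).

    scores[i][j] : raw attention score of query i on key j
    positions[k] : 1-D position of token k (e.g. a patch index)
    strength     : how strongly locality is emphasised (illustrative knob)
    """
    out = []
    for i, row in enumerate(scores):
        biased = [s - strength * abs(positions[i] - positions[j])
                  for j, s in enumerate(row)]
        out.append(softmax(biased))
    return out

# With uniform raw scores, the bias alone makes each query
# attend most strongly to its own neighbourhood.
attn = locality_biased_attention([[0.0] * 3 for _ in range(3)], [0, 1, 2])
```

Because the bias is subtracted before the softmax, distant tokens are down-weighted smoothly rather than masked out entirely.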
2 code implementations • 13 Mar 2023 • Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu
ADAM is a general framework that can be orthogonally combined with any parameter-efficient tuning method, combining the generalizability of the PTM with the adaptivity of the adapted model.
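The combination of a generalizable frozen embedding with an adapted one can be sketched as follows: concatenate the two feature vectors and classify with class-mean prototypes, so adding new classes never retrains (and thus never overwrites) old ones. This is a hedged simplification of the idea; the feature values and class names are made up for illustration.

```python
def concat_features(ptm_feat, adapted_feat):
    """Frozen PTM embedding concatenated with the adapted model's
    embedding (simplified sketch of the combined representation)."""
    return list(ptm_feat) + list(adapted_feat)

def build_prototypes(features, labels):
    """Class prototype = mean embedding of that class's samples."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        if y not in sums:
            sums[y] = [0.0] * len(f)
            counts[y] = 0
        sums[y] = [a + b for a, b in zip(sums[y], f)]
        counts[y] += 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(feature, prototypes):
    """Nearest-prototype classification: no per-class weights are
    trained, so new classes simply add new prototypes."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda y: sqdist(feature, prototypes[y]))

feats = [concat_features([1.0, 0.0], [0.9]),
         concat_features([0.0, 1.0], [0.1])]
protos = build_prototypes(feats, ["cat", "dog"])
pred = predict(concat_features([0.9, 0.1], [0.8]), protos)  # → "cat"
```

The prototype classifier is what makes the combination "orthogonal" to the tuning method: any adapter that produces a feature vector can be plugged into `concat_features`.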
3 code implementations • 7 Feb 2023 • Da-Wei Zhou, Qi-Wei Wang, Zhi-Hong Qi, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu
Deep models, e.g., CNNs and Vision Transformers, have achieved impressive results in many vision tasks in the closed world.
2 code implementations • 26 May 2022 • Da-Wei Zhou, Qi-Wei Wang, Han-Jia Ye, De-Chuan Zhan
We find that, when counting the model size into the total budget and comparing methods at an aligned memory size, saving models does not consistently help, especially under limited memory budgets.
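The budget-alignment idea reduces to simple arithmetic: a saved model of P parameters occupies the same memory as some number of raw exemplar images, so methods can be compared at equal total cost. The sketch below assumes 4-byte (float32) parameters and CIFAR-sized 32x32x3 uint8 images; both are illustrative choices, not figures from the paper.

```python
def exemplar_equivalent(num_params, bytes_per_param=4,
                        image_shape=(32, 32, 3)):
    """How many raw uint8 exemplar images one saved model 'costs'
    when model size is counted into the memory budget.
    Parameter width and image size are illustrative assumptions."""
    image_bytes = 1
    for d in image_shape:
        image_bytes *= d
    return (num_params * bytes_per_param) // image_bytes

# Storing an 11M-parameter backbone costs as much memory as
# roughly 14k CIFAR-sized exemplars.
cost = exemplar_equivalent(11_000_000)
```

Under this accounting, a method that saves a model per task must outperform a rival that spends the same bytes on extra exemplars to be judged better.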
2 code implementations • 10 Apr 2022 • Fu-Yun Wang, Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan
The ability to learn new concepts continually is necessary in this ever-changing world.
Ranked #1 on Incremental Learning on ImageNet100 - 20 steps
1 code implementation • 31 Mar 2022 • Da-Wei Zhou, Han-Jia Ye, Liang Ma, Di Xie, ShiLiang Pu, De-Chuan Zhan
In this work, we propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT), which synthesizes fake FSCIL tasks from the base dataset.
Ranked #5 on Few-Shot Class-Incremental Learning on CIFAR-100
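Synthesizing fake incremental tasks from the base dataset can be sketched as an episode sampler: hold out a few base classes as pseudo-"new" classes with only k shots each, mimicking the few-shot sessions the model will face later. The function below is a hedged simplification; the class/image identifiers are placeholders, not the paper's actual sampling procedure.

```python
import random

def sample_fake_fscil_task(base_classes, n_new=5, k_shot=5, seed=0):
    """Synthesise one fake FSCIL episode from the base classes (sketch):
    a few classes become pseudo-new classes seen with only k shots."""
    rng = random.Random(seed)
    pool = sorted(base_classes)
    new_classes = rng.sample(pool, n_new)
    old_classes = [c for c in pool if c not in new_classes]
    # Placeholder image identifiers stand in for real k-shot samples.
    shots = {c: [f"class{c}_img{i}" for i in range(k_shot)]
             for c in new_classes}
    return old_classes, new_classes, shots

old, new, shots = sample_fake_fscil_task(range(60))
```

Training on many such synthesized episodes lets the model rehearse the incremental few-shot setting before any genuinely new class arrives.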
1 code implementation • CVPR 2022 • Da-Wei Zhou, Fu-Yun Wang, Han-Jia Ye, Liang Ma, ShiLiang Pu, De-Chuan Zhan
Forward compatibility requires future new classes to be easily incorporated into the current model based on the current stage data, and we seek to realize it by reserving embedding space for future new classes.
Ranked #4 on Few-Shot Class-Incremental Learning on CIFAR-100
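Reserving embedding space for future classes can be illustrated with a minimal sketch: allocate more evenly spaced prototype directions than there are current classes, and hold the unassigned directions back for classes that arrive later. The 2-D layout and slot counts are assumptions for illustration, not the paper's construction.

```python
import math

def make_prototypes(total_slots):
    """Evenly spaced 2-D unit vectors, one slot per current or future
    class. Allocating more slots than current classes reserves
    embedding space for classes seen later (simplified sketch)."""
    return [(math.cos(2 * math.pi * k / total_slots),
             math.sin(2 * math.pi * k / total_slots))
            for k in range(total_slots)]

# Stage 0: two known classes, but four slots are reserved up front,
# so two directions stay free for future classes.
slots = make_prototypes(4)
current = slots[:2]    # assigned to today's classes
reserved = slots[2:]   # held back for future classes
```

Because the reserved directions are never occupied by current classes, a new class can claim one without squeezing the old decision regions.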
1 code implementation • 23 Dec 2021 • Da-Wei Zhou, Fu-Yun Wang, Han-Jia Ye, De-Chuan Zhan
Traditional machine learning systems are deployed under the closed-world setting, which requires all training data to be available before the offline training process.
2 code implementations • 27 Jul 2021 • Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan
As a result, we propose CO-transport for class Incremental Learning (COIL), which learns to relate classes across incremental tasks via their class-wise semantic relationships.
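Relating old and new classes through transport can be sketched with standard entropy-regularized optimal transport (Sinkhorn iterations) between the two class sets, given a semantic cost matrix. This is a generic Sinkhorn sketch, not COIL's exact procedure; the cost values and class pairings below are invented for illustration.

```python
import math

def sinkhorn(cost, reg=0.1, iters=200):
    """Entropy-regularised optimal transport with uniform marginals
    (standard Sinkhorn sketch). cost[i][j] is the semantic distance
    between old class i and new class j."""
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-c / reg) for c in row] for row in cost]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [(1.0 / n) / sum(K[i][j] * v[j] for j in range(m))
             for i in range(n)]
        v = [(1.0 / m) / sum(K[i][j] * u[i] for i in range(n))
             for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Old classes {tiger, truck} vs. new classes {lion, bus}: low cost
# means semantically close, so the plan moves mass along those pairs.
plan = sinkhorn([[0.1, 0.9],   # tiger → (lion, bus)
                 [0.9, 0.1]])  # truck → (lion, bus)
```

A transport plan like this can then weight how much each old classifier contributes when initializing or regularizing a semantically related new one.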
1 code implementation • 15 Jun 2021 • Han-Jia Ye, Da-Wei Zhou, Lanqing Hong, Zhenguo Li, Xiu-Shen Wei, De-Chuan Zhan
To this end, we propose Learning to Decompose Network (LeadNet) to contextualize the meta-learned "support-to-target" strategy, leveraging the context of instances with one or mixed latent attributes in a support set.
1 code implementation • CVPR 2021 • Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan
To this end, we propose to learn PlaceholdeRs for Open-SEt Recognition (Proser), which prepares for the unknown classes by allocating placeholders for both data and classifier.
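The classifier-placeholder idea can be sketched very simply: append one extra logit that stands in for all unknown classes, and flag a sample as open-set whenever no known class outscores it. This is a hedged simplification of the placeholder mechanism; the logit values and the fixed placeholder score are illustrative assumptions.

```python
def classify_with_placeholder(logits, placeholder_logit=0.0):
    """One extra 'placeholder' logit stands in for every unknown class
    (simplified sketch): if no known class beats the placeholder,
    the sample is rejected as open-set instead of being forced into
    a known class."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return best if logits[best] > placeholder_logit else "unknown"

confident = classify_with_placeholder([4.0, -1.0, 0.5])    # known class 0
ambiguous = classify_with_placeholder([-0.5, -0.2, -0.9])  # rejected
```

The placeholder gives the classifier an explicit "none of the above" option, which a closed-world softmax over known classes cannot express.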