Search Results for author: Da-Wei Zhou

Found 18 papers, 15 papers with code

TV100: A TV Series Dataset that Pre-Trained CLIP Has Not Seen

no code implementations • 16 Apr 2024 • Da-Wei Zhou, Zhi-Hong Qi, Han-Jia Ye, De-Chuan Zhan

The era of pre-trained models has ushered in a wealth of new insights for the machine learning community.

Incremental Learning · Novel Class Discovery

Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning

1 code implementation • 18 Mar 2024 • Da-Wei Zhou, Hai-Long Sun, Han-Jia Ye, De-Chuan Zhan

Despite the strong performance of Pre-Trained Models (PTMs) in CIL, a critical issue persists: learning new classes often results in the overwriting of old ones.

Class Incremental Learning · Decision Making · +1

Continual Learning with Pre-Trained Models: A Survey

2 code implementations • 29 Jan 2024 • Da-Wei Zhou, Hai-Long Sun, Jingyi Ning, Han-Jia Ye, De-Chuan Zhan

Nowadays, real-world applications often face streaming data, which requires the learning system to absorb new knowledge as data evolves.

Continual Learning · Fairness

Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration

1 code implementation • NeurIPS 2023 • Qi-Wei Wang, Da-Wei Zhou, Yi-Kai Zhang, De-Chuan Zhan, Han-Jia Ye

In this Few-Shot Class-Incremental Learning (FSCIL) scenario, existing methods either introduce extra learnable components or rely on a frozen feature extractor to mitigate catastrophic forgetting and overfitting problems.

Few-Shot Class-Incremental Learning · Few-Shot Learning · +3
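The training-free calibration idea described above can be sketched as follows; this is a minimal illustration of shifting a noisy few-shot prototype toward similar, well-estimated base-class prototypes, not the paper's exact procedure, and `calibrate_prototype`, `temperature`, and `alpha` are hypothetical names and settings.

```python
import numpy as np

def calibrate_prototype(new_proto, base_protos, temperature=0.1, alpha=0.5):
    """Shift a few-shot class prototype toward similar base-class
    prototypes, weighted by cosine similarity (hypothetical sketch)."""
    # Cosine similarity between the new prototype and each base prototype.
    sims = base_protos @ new_proto / (
        np.linalg.norm(base_protos, axis=1) * np.linalg.norm(new_proto) + 1e-12)
    weights = np.exp(sims / temperature)
    weights /= weights.sum()
    # Convex combination: keep part of the biased few-shot estimate,
    # borrow the rest from well-estimated base prototypes.
    return alpha * new_proto + (1 - alpha) * weights @ base_protos

rng = np.random.default_rng(0)
base = rng.normal(size=(10, 64))   # 10 base-class prototypes
new = rng.normal(size=64)          # noisy prototype estimated from few shots
calibrated = calibrate_prototype(new, base)
print(calibrated.shape)  # (64,)
```

Because no gradient step is involved, such a calibration can be applied at inference time with a frozen feature extractor.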

PILOT: A Pre-Trained Model-Based Continual Learning Toolbox

1 code implementation • 13 Sep 2023 • Hai-Long Sun, Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan

While traditional machine learning can effectively tackle a wide range of problems, it primarily operates within a closed-world setting, which presents limitations when dealing with streaming data.

Class Incremental Learning · Incremental Learning

Streaming CTR Prediction: Rethinking Recommendation Task for Real-World Streaming Data

no code implementations • 14 Jul 2023 • Qi-Wei Wang, Hongyu Lu, Yu Chen, Da-Wei Zhou, De-Chuan Zhan, Ming Chen, Han-Jia Ye

The Click-Through Rate (CTR) prediction task is critical in industrial recommender systems, where models are usually deployed on dynamic streaming data in practical applications.

Click-Through Rate Prediction · Recommendation Systems

Learning without Forgetting for Vision-Language Models

no code implementations • 30 May 2023 • Da-Wei Zhou, Yuanhan Zhang, Jingyi Ning, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu

While traditional CIL methods focus on visual information to grasp core features, recent advances in Vision-Language Models (VLM) have shown promising capabilities in learning generalizable representations with the aid of textual information.

Class Incremental Learning · Incremental Learning

Preserving Locality in Vision Transformers for Class Incremental Learning

1 code implementation • 14 Apr 2023 • Bowen Zheng, Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan

In this paper, we encourage the model to preserve more local information as the training procedure goes on and devise a Locality-Preserved Attention (LPA) layer to emphasize the importance of local features.

Class Incremental Learning · Incremental Learning
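A toy analogue of emphasizing local features in attention is restricting each token to a neighborhood window, sketched below; this is not the paper's LPA layer, and `local_attention` and `window` are hypothetical names.

```python
import numpy as np

def local_attention(x, window=2):
    """Single-head self-attention restricted to a local window of
    positions (hypothetical sketch of locality-biased attention)."""
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)
    # Mask out token pairs farther apart than `window` positions.
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) > window
    scores[mask] = -np.inf
    # Row-wise softmax over the surviving (local) scores.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

tokens = np.random.default_rng(1).normal(size=(8, 16))
out = local_attention(tokens, window=2)
print(out.shape)  # (8, 16)
```

With `window=0` each token attends only to itself, so the layer reduces to the identity, which makes the locality constraint easy to verify.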

Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need

2 code implementations • 13 Mar 2023 • Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu

ADAM is a general framework that can be orthogonally combined with any parameter-efficient tuning method, retaining both the PTM's generalizability and the adapted model's adaptivity.

Class Incremental Learning · Incremental Learning · +1
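The combination of generalizability and adaptivity can be sketched as concatenating frozen pre-trained features with features from an adapted copy, then classifying by nearest class mean; the backbones here are stand-in linear maps, and `adam_features` and `ncm_predict` are hypothetical names, not the released API.

```python
import numpy as np

def adam_features(x, ptm, adapted):
    """Concatenate frozen pre-trained features with features from an
    adapted copy of the backbone (sketch of the general idea only)."""
    return np.concatenate([ptm(x), adapted(x)], axis=-1)

def ncm_predict(feats, class_means):
    """Nearest-class-mean classifier over the combined embedding."""
    dists = np.linalg.norm(feats[:, None, :] - class_means[None, :, :], axis=-1)
    return dists.argmin(axis=1)

rng = np.random.default_rng(2)
W1, W2 = rng.normal(size=(32, 8)), rng.normal(size=(32, 8))
ptm = lambda x: x @ W1      # stands in for a frozen pre-trained backbone
adapted = lambda x: x @ W2  # stands in for a parameter-efficiently tuned copy
x = rng.normal(size=(5, 32))
means = adam_features(rng.normal(size=(3, 32)), ptm, adapted)  # 3 class means
pred = ncm_predict(adam_features(x, ptm, adapted), means)
print(pred.shape)  # (5,)
```

A prototype-based classifier like this needs no further gradient updates when new classes arrive, which is why it pairs naturally with a frozen backbone.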

Deep Class-Incremental Learning: A Survey

3 code implementations • 7 Feb 2023 • Da-Wei Zhou, Qi-Wei Wang, Zhi-Hong Qi, Han-Jia Ye, De-Chuan Zhan, Ziwei Liu

Deep models, e.g., CNNs and Vision Transformers, have achieved impressive results in many vision tasks in the closed world.

Class Incremental Learning · Image Classification · +1

A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning

2 code implementations • 26 May 2022 • Da-Wei Zhou, Qi-Wei Wang, Han-Jia Ye, De-Chuan Zhan

We find that when counting the model size into the total budget and comparing methods with aligned memory size, saving models does not consistently work, especially with limited memory budgets.

Class Incremental Learning · Incremental Learning
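The memory accounting behind the title can be illustrated with back-of-envelope arithmetic: a stored model competes for the same byte budget as stored exemplars. The parameter count below is an approximation for a CIFAR-scale ResNet-32, not a figure taken from the paper.

```python
def exemplar_equivalent(n_params, bytes_per_param=4, image_bytes=32 * 32 * 3):
    """How many raw 32x32 RGB exemplars occupy the same memory as a
    model with n_params float32 parameters."""
    return n_params * bytes_per_param // image_bytes

# A CIFAR-style ResNet-32 has roughly 0.46M parameters (approximate),
# which under this accounting costs about as much as ~600 exemplars --
# presumably the origin of the number in the title.
print(exemplar_equivalent(460_000))  # ~600
```

Aligning budgets this way lets a method that stores extra models be compared fairly against one that spends the same bytes on extra exemplars.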

Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks

1 code implementation • 31 Mar 2022 • Da-Wei Zhou, Han-Jia Ye, Liang Ma, Di Xie, ShiLiang Pu, De-Chuan Zhan

In this work, we propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT), which synthesizes fake FSCIL tasks from the base dataset.

Few-Shot Class-Incremental Learning · Incremental Learning · +1
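The fake-task synthesis can be sketched as carving a pseudo-incremental split out of the base label set, so that meta-training repeatedly previews the real incremental phase; this shows only the task-sampling step under assumed names (`sample_fake_fscil_task`, `n_pseudo_new`, `n_shot`), not the full meta-learning loop.

```python
import numpy as np

def sample_fake_fscil_task(base_classes, n_pseudo_new=5, n_shot=1, rng=None):
    """Split the base label set into pseudo-base and pseudo-new classes,
    mimicking a few-shot incremental session (hypothetical sketch)."""
    rng = rng or np.random.default_rng()
    pseudo_new = rng.choice(base_classes, size=n_pseudo_new, replace=False)
    pseudo_base = [c for c in base_classes if c not in set(pseudo_new)]
    # n_shot examples per pseudo-new class would then form the support set.
    return pseudo_base, list(pseudo_new)

base, new = sample_fake_fscil_task(list(range(20)),
                                   rng=np.random.default_rng(5))
print(len(base), len(new))  # 15 5
```

Re-sampling different splits each episode exposes the model to many simulated incremental tasks before any genuinely new class appears.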

Forward Compatible Few-Shot Class-Incremental Learning

1 code implementation • CVPR 2022 • Da-Wei Zhou, Fu-Yun Wang, Han-Jia Ye, Liang Ma, ShiLiang Pu, De-Chuan Zhan

Forward compatibility requires future new classes to be easily incorporated into the current model based on the current stage data, and we seek to realize it by reserving embedding space for future new classes.

Few-Shot Class-Incremental Learning · Incremental Learning
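Reserving embedding space can be pictured as allocating classifier slots for classes that have not arrived yet; the sketch below shows only that structural idea (extra prototype rows activated later), with hypothetical sizes and names, and omits how base-session training keeps those slots clear.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_base, n_virtual = 16, 5, 3  # hypothetical embedding dim and class counts

# Allocate rows for future classes up front: the classifier already holds
# reserved (virtual) prototypes alongside the base-class ones.
prototypes = rng.normal(size=(n_base + n_virtual, d))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

def assign(embedding, active):
    """Classify against only the first `active` prototypes."""
    return int((prototypes[:active] @ embedding).argmax())

z = rng.normal(size=d)
base_pred = assign(z, n_base)               # base session: 5-way
later_pred = assign(z, n_base + n_virtual)  # after new classes arrive: 8-way
print(base_pred, later_pred)
```

Because the slots exist from the start, incorporating a new class amounts to activating a reserved prototype rather than reshaping the embedding space.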

PyCIL: A Python Toolbox for Class-Incremental Learning

1 code implementation • 23 Dec 2021 • Da-Wei Zhou, Fu-Yun Wang, Han-Jia Ye, De-Chuan Zhan

Traditional machine learning systems are deployed under the closed-world setting, which requires the entire training data to be available before the offline training process.

BIG-bench Machine Learning · Class Incremental Learning · +1

Co-Transport for Class-Incremental Learning

2 code implementations • 27 Jul 2021 • Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan

As a result, we propose CO-transport for class Incremental Learning (COIL), which learns to relate across incremental tasks with the class-wise semantic relationship.

Class Incremental Learning · Incremental Learning
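Relating incremental tasks through class-wise semantics can be sketched as initializing each new class's classifier from old-class classifiers weighted by semantic similarity; this is a similarity-softmax simplification for illustration, not the paper's transport formulation, and `transport_init` and `temperature` are hypothetical.

```python
import numpy as np

def transport_init(old_weights, old_means, new_means, temperature=0.5):
    """Initialize new-class classifier weights as a similarity-weighted
    mixture of old-class weights (hypothetical sketch)."""
    def unit(m):
        return m / np.linalg.norm(m, axis=1, keepdims=True)
    # Class-wise semantic relation: softmax over cosine similarity of means.
    sims = unit(new_means) @ unit(old_means).T
    rel = np.exp(sims / temperature)
    rel /= rel.sum(axis=1, keepdims=True)
    return rel @ old_weights  # each new class borrows from related old ones

rng = np.random.default_rng(4)
old_w = rng.normal(size=(4, 16))   # classifiers for 4 old classes
old_mu = rng.normal(size=(4, 16))  # old-class means
new_mu = rng.normal(size=(2, 16))  # means of 2 incoming classes
new_w = transport_init(old_w, old_mu, new_mu)
print(new_w.shape)  # (2, 16)
```

Warm-starting new classifiers from semantically related old ones gives the incremental stage a better initialization than random weights.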

Contextualizing Meta-Learning via Learning to Decompose

1 code implementation • 15 Jun 2021 • Han-Jia Ye, Da-Wei Zhou, Lanqing Hong, Zhenguo Li, Xiu-Shen Wei, De-Chuan Zhan

To this end, we propose Learning to Decompose Network (LeadNet) to contextualize the meta-learned "support-to-target" strategy, leveraging the context of instances with one or mixed latent attributes in a support set.

Attribute · Few-Shot Image Classification · +1

Learning Placeholders for Open-Set Recognition

1 code implementation • CVPR 2021 • Da-Wei Zhou, Han-Jia Ye, De-Chuan Zhan

To this end, we propose to learn PlaceholdeRs for Open-SEt Recognition (Proser), which prepares for the unknown classes by allocating placeholders for both data and classifier.

Open Set Learning
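The classifier-placeholder idea can be sketched as appending one extra logit per sample and flagging the input as unknown when that logit wins; this illustrates only the decision rule (the data placeholders and the training procedure are omitted), and `predict_with_placeholder` is a hypothetical name.

```python
import numpy as np

def predict_with_placeholder(logits, placeholder_logit):
    """Append a placeholder logit per sample; if it wins the argmax,
    flag the sample as unknown (hypothetical sketch)."""
    full = np.concatenate([logits, placeholder_logit[:, None]], axis=1)
    pred = full.argmax(axis=1)
    k = logits.shape[1]
    return np.where(pred == k, -1, pred)  # -1 marks "unknown"

known_logits = np.array([[2.0, 0.1, 0.3],
                         [0.2, 0.1, 0.3]])
placeholder = np.array([0.5, 1.0])  # placeholder scores per sample
print(predict_with_placeholder(known_logits, placeholder))  # [ 0 -1]
```

The placeholder acts as a learned, instance-dependent rejection threshold rather than a single global cutoff on the maximum softmax score.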
