1 code implementation • ICCV 2023 • Shibo Jie, Haoqing Wang, Zhi-Hong Deng
Current state-of-the-art results in computer vision depend in part on fine-tuning large pre-trained vision models.
no code implementations • 23 Feb 2023 • Ziheng Li, Shibo Jie, Zhi-Hong Deng
In continual learning, a model needs to continually learn a feature extractor and a classifier on a sequence of tasks.
1 code implementation • 6 Dec 2022 • Shibo Jie, Zhi-Hong Deng
Recent work has explored the potential to adapt a pre-trained vision transformer (ViT) by updating only a few parameters in order to improve storage efficiency, an approach called parameter-efficient transfer learning (PETL).
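As a rough illustration of the general PETL idea (not this paper's specific method), the sketch below freezes a pre-trained ViT backbone and trains only small bottleneck adapters and a new task head; module names, the adapter design, and hyperparameters are illustrative assumptions.

```python
# Generic PETL sketch: freeze the backbone, train only tiny adapters + head.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim: int, hidden: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)
        nn.init.zeros_(self.up.weight)   # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

vit = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
for p in vit.parameters():               # freeze the entire pre-trained backbone
    p.requires_grad = False

# Append a trainable adapter after each frozen encoder block.
vit.encoder.layers = nn.Sequential(
    *[nn.Sequential(block, Adapter(dim=768)) for block in vit.encoder.layers]
)
vit.heads = nn.Linear(768, 100)          # new head for a hypothetical 100-class task

trainable = [p for p in vit.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)
```

Only the adapters and the head are updated, so the per-task storage cost is a small fraction of the full model.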
1 code implementation • 14 Jul 2022 • Shibo Jie, Zhi-Hong Deng
The pretrain-then-finetune paradigm has been widely adopted in computer vision.
no code implementations • 19 May 2022 • Gehui Shen, Shibo Jie, Ziheng Li, Zhi-Hong Deng
In our framework, a generative classifier that utilizes replay memory is used for inference, and the training objective is a pair-based metric learning loss that is theoretically proven to optimize the feature space in a generative way.
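A hedged sketch of this general recipe (illustrative only, not the paper's exact formulation): the encoder is trained with a pair-based metric-learning loss, and inference is performed generatively by modeling each class with statistics estimated from the replay memory. All names below are hypothetical.

```python
import torch
import torch.nn.functional as F

def pair_based_loss(features, labels, margin: float = 0.5):
    """Contrastive pair loss: pull same-class pairs together,
    push different-class pairs at least `margin` apart."""
    features = F.normalize(features, dim=1)
    dist = torch.cdist(features, features)                 # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos = dist[same & ~eye]                                 # positive pairs
    neg = dist[~same]                                       # negative pairs
    return pos.pow(2).mean() + F.relu(margin - neg).pow(2).mean()

@torch.no_grad()
def generative_predict(encoder, x, memory_x, memory_y):
    """Nearest-class-mean inference: model each class in feature space by the
    mean of its replay-memory exemplars and assign the closest class."""
    feats = F.normalize(encoder(x), dim=1)
    mem_feats = F.normalize(encoder(memory_x), dim=1)
    classes = memory_y.unique()
    means = torch.stack([mem_feats[memory_y == c].mean(0) for c in classes])
    return classes[torch.cdist(feats, means).argmin(dim=1)]
```

Because classification relies on class statistics rather than a discriminative linear head, new classes can be accommodated by adding their exemplars to the replay memory.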
1 code implementation • 22 Apr 2022 • Shibo Jie, Zhi-Hong Deng, Ziheng Li
We study a practical setting of continual learning: fine-tuning on a pre-trained model continually.