1 code implementation • ICCV 2023 • Yu-Ming Tang, Yi-Xing Peng, Wei-Shi Zheng
However, existing prompt-based methods rely heavily on strong pretraining (typically on ImageNet-21k), and we find that these models can become trapped when the gap between the pretraining task and unknown future tasks is large.
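To make the dependence concrete, below is a minimal sketch of the prompt-based continual-learning setup this finding concerns: a frozen pretrained backbone where only a small set of learnable prompt tokens (and a classifier head) can adapt to new tasks. The `PromptedClassifier` class, the toy encoder, and all dimensions are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of prompt tuning on a frozen backbone (hypothetical, not the paper's code).
import torch
import torch.nn as nn

class PromptedClassifier(nn.Module):
    def __init__(self, backbone, embed_dim, num_prompts, num_classes):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        # Learnable prompts are the only task-adaptive capacity, so the model
        # inherits whatever biases the pretraining instilled.
        self.prompts = nn.Parameter(torch.randn(num_prompts, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens):
        # tokens: (batch, seq_len, embed_dim) patch embeddings
        prompts = self.prompts.unsqueeze(0).expand(tokens.shape[0], -1, -1)
        x = self.backbone(torch.cat([prompts, tokens], dim=1))  # prepend prompts
        return self.head(x[:, 0])  # read out the first prompt token

# Toy frozen encoder standing in for an ImageNet-21k-pretrained ViT.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2)
model = PromptedClassifier(encoder, embed_dim=64, num_prompts=4, num_classes=10)
logits = model(torch.randn(2, 16, 64))  # -> (2, 10)
```

Because gradients only reach the prompts and head, performance hinges on how well the frozen features transfer, which is exactly the failure mode when pretraining and future tasks diverge.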
1 code implementation • 3 Feb 2023 • Jiayu Jiao, Yu-Ming Tang, Kun-Yu Lin, Yipeng Gao, Jinhua Ma, YaoWei Wang, Wei-Shi Zheng
In this work, we explore effective Vision Transformers that strike a better trade-off between computational complexity and the size of the attended receptive field.
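One common way to realize this trade-off is dilated local attention: each query attends to a fixed-size neighborhood whose spatial extent grows with the dilation rate, so the receptive field enlarges while compute stays constant. The sketch below illustrates that general idea under assumed shapes and names; it is not the paper's exact attention design.

```python
# Sketch of dilated local attention (illustrative, not the paper's module).
import torch
import torch.nn.functional as F

def dilated_local_attention(x, k=3, dilation=2):
    # x: (batch, channels, H, W) feature map; each pixel is a query.
    b, c, h, w = x.shape
    pad = dilation * (k - 1) // 2
    # Gather each pixel's k*k dilated neighborhood as keys/values:
    # cost stays O(HW * k^2) while the receptive field spans
    # (dilation*(k-1)+1)^2 pixels.
    neigh = F.unfold(x, kernel_size=k, dilation=dilation, padding=pad)
    neigh = neigh.view(b, c, k * k, h * w)            # (b, c, k^2, HW)
    q = x.view(b, c, 1, h * w)                        # (b, c, 1,   HW)
    attn = (q * neigh).sum(dim=1) / c ** 0.5          # (b, k^2, HW)
    attn = attn.softmax(dim=1)                        # weights over neighbors
    out = (neigh * attn.unsqueeze(1)).sum(dim=2)      # (b, c, HW)
    return out.view(b, c, h, w)

y = dilated_local_attention(torch.randn(2, 8, 16, 16), k=3, dilation=2)
```

Increasing `dilation` widens what each query "sees" without adding any attention entries, which is the complexity-versus-receptive-field lever the abstract refers to.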
1 code implementation • CVPR 2022 • Yu-Ming Tang, Yi-Xing Peng, Wei-Shi Zheng
The diverse generated samples can effectively prevent the DNN from forgetting when it learns new tasks.
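For intuition, here is a minimal sketch of generic generative replay: generated pseudo-samples of past tasks are mixed into each new-task batch, with a frozen copy of the old model supplying their labels. The generator, models, and `replay_step` function are hypothetical stand-ins, not the paper's actual components or sample-generation strategy.

```python
# Sketch of replaying generated samples during training (assumed setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGenerator(nn.Module):
    latent_dim = 16
    def __init__(self, out_dim=32):
        super().__init__()
        self.net = nn.Linear(self.latent_dim, out_dim)
    def forward(self, z):
        return self.net(z)

def replay_step(model, optimizer, new_x, new_y, generator, old_model):
    """One step that mixes generated old-task samples into the new-task batch."""
    with torch.no_grad():
        z = torch.randn(new_x.size(0), generator.latent_dim)
        replay_x = generator(z)                       # pseudo-samples of past data
        replay_y = old_model(replay_x).argmax(dim=1)  # labeled by the frozen old model
    x, y = torch.cat([new_x, replay_x]), torch.cat([new_y, replay_y])
    loss = F.cross_entropy(model(x), y)               # rehearse old + learn new
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model, old_model, gen = nn.Linear(32, 10), nn.Linear(32, 10).eval(), ToyGenerator()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss = replay_step(model, opt, torch.randn(8, 32),
                   torch.randint(0, 10, (8,)), gen, old_model)
```

The more diverse the generated `replay_x`, the broader the slice of old-task behavior rehearsed each step, which is why sample diversity matters for mitigating forgetting.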