Revisiting the Power of Prompt for Visual Tuning

4 Feb 2024 · Yuzhu Wang, Lechao Cheng, Chaowei Fang, Dingwen Zhang, Manni Duan, Meng Wang

Visual prompt tuning (VPT) is a promising solution that incorporates learnable prompt tokens to customize pre-trained models for downstream tasks. However, VPT and its variants often face challenges such as prompt initialization, prompt length, and subpar performance under self-supervised pretraining, hindering successful contextual adaptation. This study begins by exploring how the correlations between prompts and patch tokens evolve during proficient training. Motivated by the observation that prompt tokens tend to share high mutual information with patch tokens, we propose initializing prompts with downstream token prototypes. This strategic initialization, a substitute for the conventional initialization, substantially improves fine-tuning performance. To refine further, we optimize token construction with a streamlined pipeline that maintains excellent performance at almost no additional computational cost compared to VPT. Extensive experiments show that our proposed approach (SPT) outperforms existing methods by a remarkable margin. For instance, it surpasses full fine-tuning in 19 of 24 tasks while using less than 0.4% of the learnable parameters on the FGVC and VTAB-1k benchmarks. Notably, SPT significantly advances adaptation for self-supervised pretraining, achieving task performance gains of at least 10% to 30%. The experimental results also demonstrate that SPT is robust to prompt length and scales well with model capacity and training data size. Finally, we provide an insightful exploration of how much target data facilitates the adaptation of pre-trained models to downstream tasks.
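The prototype-based initialization is straightforward to sketch. Below is a minimal PyTorch illustration, not the authors' implementation: it pools the patch-token embeddings produced by the frozen backbone over the downstream training set, runs a few k-means iterations to obtain one prototype per prompt (the clustering step is an assumption for illustration; any prototype estimator would fit), and uses the centroids to initialize the learnable prompt tokens. The `backbone` interface returning `(B, N, D)` patch tokens is also assumed.

```python
import torch

@torch.no_grad()
def init_prompts_from_prototypes(backbone, loader, num_prompts, iters=10):
    """Hypothetical sketch: build prompt tokens from downstream
    token prototypes instead of random initialization.

    Assumes `backbone(images)` returns patch-token embeddings of
    shape (B, N, D), as in a frozen ViT encoder.
    """
    backbone.eval()
    feats = []
    for images, _ in loader:
        tokens = backbone(images)                  # (B, N, D) patch tokens
        feats.append(tokens.flatten(0, 1).cpu())   # collect all patch tokens
    feats = torch.cat(feats)                       # (M, D)

    # A few Lloyd (k-means) iterations to get one prototype per prompt.
    centers = feats[torch.randperm(len(feats))[:num_prompts]].clone()
    for _ in range(iters):
        assign = torch.cdist(feats, centers).argmin(dim=1)  # nearest center
        for k in range(num_prompts):
            members = feats[assign == k]
            if len(members) > 0:
                centers[k] = members.mean(dim=0)

    # Learnable prompts, initialized with the prototypes: (num_prompts, D).
    return torch.nn.Parameter(centers)
```

In a VPT-style model, these parameters would simply replace the randomly initialized prompt tensor before fine-tuning begins.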

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Visual Prompt Tuning | FGVC | SPT-Deep(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) | Mean Accuracy | 86.00 | #1 |
| Visual Prompt Tuning | FGVC | SPT-Shallow(ViT-B/16_MAE_pretrained_ImageNet-1K) | Mean Accuracy | 73.95 | #7 |
| Visual Prompt Tuning | FGVC | SPT-Shallow(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) | Mean Accuracy | 84.08 | #2 |
| Visual Prompt Tuning | FGVC | SPT-Deep(ViT-B/16_MAE_pretrained_ImageNet-1K) | Mean Accuracy | 83.26 | #3 |
| Visual Prompt Tuning | VTAB-1k(Natural<7>) | SPT-Deep(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) | Mean Accuracy | 76.20 | #1 |
| Visual Prompt Tuning | VTAB-1k(Natural<7>) | SPT-Shallow(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) | Mean Accuracy | 74.47 | #3 |
| Visual Prompt Tuning | VTAB-1k(Natural<7>) | SPT-Deep(ViT-B/16_MAE_pretrained_ImageNet-1K) | Mean Accuracy | 67.19 | #6 |
| Visual Prompt Tuning | VTAB-1k(Natural<7>) | SPT-Shallow(ViT-B/16_MAE_pretrained_ImageNet-1K) | Mean Accuracy | 62.53 | #7 |
| Visual Prompt Tuning | VTAB-1k(Specialized<4>) | SPT-Shallow(ViT-B/16_MAE_pretrained_ImageNet-1K) | Mean Accuracy | 80.90 | #7 |
| Visual Prompt Tuning | VTAB-1k(Specialized<4>) | SPT-Deep(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) | Mean Accuracy | 84.95 | #1 |
| Visual Prompt Tuning | VTAB-1k(Specialized<4>) | SPT-Shallow(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) | Mean Accuracy | 83.93 | #2 |
| Visual Prompt Tuning | VTAB-1k(Specialized<4>) | SPT-Deep(ViT-B/16_MAE_pretrained_ImageNet-1K) | Mean Accuracy | 83.15 | #4 |
| Visual Prompt Tuning | VTAB-1k(Structured<8>) | SPT-Deep(ViT-B/16_MAE_pretrained_ImageNet-1K) | Mean Accuracy | 59.23 | #1 |
| Visual Prompt Tuning | VTAB-1k(Structured<8>) | SPT-Shallow(ViT-B/16_MAE_pretrained_ImageNet-1K) | Mean Accuracy | 53.46 | #4 |
| Visual Prompt Tuning | VTAB-1k(Structured<8>) | SPT-Shallow(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) | Mean Accuracy | 55.16 | #3 |
| Visual Prompt Tuning | VTAB-1k(Structured<8>) | SPT-Deep(ViT-B/16_MoCo_v3_pretrained_ImageNet-1K) | Mean Accuracy | 58.36 | #2 |
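The leaderboard distinguishes SPT-Shallow from SPT-Deep. Following VPT's naming convention, a shallow variant prepends one set of learnable prompts to the patch tokens at the first transformer block only, whereas a deep variant injects a fresh prompt set at every block, discarding the previous layer's prompt outputs. A minimal sketch of the distinction (the block interface is an assumption, not the paper's code):

```python
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    """Illustrative shallow/deep prompting over a stack of transformer
    blocks, each assumed to map (B, N, D) -> (B, N, D)."""

    def __init__(self, blocks, num_prompts, dim, deep=True):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        self.deep = deep
        n_sets = len(blocks) if deep else 1       # prompt sets: per layer or one
        self.prompts = nn.Parameter(torch.randn(n_sets, num_prompts, dim) * 0.02)

    def forward(self, tokens):                    # tokens: (B, N, D)
        B, p = tokens.shape[0], self.prompts.shape[1]
        for i, block in enumerate(self.blocks):
            if self.deep:
                if i > 0:
                    tokens = tokens[:, p:]        # drop last layer's prompt outputs
                tokens = torch.cat([self.prompts[i].expand(B, -1, -1), tokens], dim=1)
            elif i == 0:                          # shallow: prepend once
                tokens = torch.cat([self.prompts[0].expand(B, -1, -1), tokens], dim=1)
            tokens = block(tokens)
        return tokens[:, p:]                      # patch-token outputs only
```

Under SPT, the `torch.randn` initialization above would be replaced by the prototype-based initialization sketched earlier.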
