Conditional Prompt Learning for Vision-Language Models

With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets. A recently proposed method named Context Optimization (CoOp) introduces the concept of prompt learning -- a recent trend in NLP -- to the vision domain for adapting pre-trained vision-language models. Specifically, CoOp turns context words in a prompt into a set of learnable vectors and, with only a few labeled images for learning, can achieve huge improvements over intensively-tuned manual prompts. In our study we identify a critical problem of CoOp: the learned context is not generalizable to wider unseen classes within the same dataset, suggesting that CoOp overfits base classes observed during training. To address the problem, we propose Conditional Context Optimization (CoCoOp), which extends CoOp by further learning a lightweight neural network to generate for each image an input-conditional token (vector). Compared to CoOp's static prompts, our dynamic prompts adapt to each instance and are thus less sensitive to class shift. Extensive experiments show that CoCoOp generalizes much better than CoOp to unseen classes, even showing promising transferability beyond a single dataset; and yields stronger domain generalization performance as well. Code is available at https://github.com/KaiyangZhou/CoOp.
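The core mechanism the abstract describes, a lightweight meta-network that maps each image's feature to an input-conditional token added to the learnable context vectors, can be illustrated with a minimal NumPy sketch. The dimensions, the two-layer MLP shape, and all variable names below are illustrative assumptions, not the paper's actual configuration (the real method operates on CLIP token embeddings and is trained end-to-end):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumed, not from the paper)
dim = 512    # token / image-feature embedding dimension
n_ctx = 4    # number of learnable context tokens

# CoOp-style static learnable context vectors v_1 ... v_M
ctx = rng.normal(scale=0.02, size=(n_ctx, dim))

# CoCoOp's lightweight meta-network, sketched here as a two-layer
# bottleneck MLP mapping an image feature to one conditional token pi(x)
w1 = rng.normal(scale=0.02, size=(dim, dim // 16))
w2 = rng.normal(scale=0.02, size=(dim // 16, dim))

def meta_net(image_feature):
    h = np.maximum(image_feature @ w1, 0.0)  # ReLU hidden layer
    return h @ w2                            # conditional token pi(x)

def conditional_context(image_feature):
    # Each static context token is shifted by the instance-specific token:
    # v_m(x) = v_m + pi(x), so the prompt changes per input image
    pi = meta_net(image_feature)
    return ctx + pi  # pi broadcasts across all n_ctx tokens

image_feature = rng.normal(size=(dim,))
dynamic_ctx = conditional_context(image_feature)
print(dynamic_ctx.shape)  # (4, 512)
```

Because the shift `pi(x)` depends on the image rather than on the class, the same small network serves every class, which is what makes the prompt instance-adaptive rather than tied to the base classes seen in training.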

CVPR 2022

Results from the Paper


Task                Dataset                   Model   Metric            Value  Global Rank
Prompt Engineering  Caltech-101               CoCoOp  Harmonic mean     95.84  #8
Prompt Engineering  DTD                       CoCoOp  Harmonic mean     64.85  #8
Prompt Engineering  EuroSAT                   CoCoOp  Harmonic mean     71.21  #8
Prompt Engineering  FGVC-Aircraft             CoCoOp  Harmonic mean     27.74  #9
Prompt Engineering  Food-101                  CoCoOp  Harmonic mean     90.99  #7
Prompt Engineering  ImageNet                  CoCoOp  Harmonic mean     73.10  #9
Prompt Engineering  ImageNet-A                CoCoOp  Top-1 accuracy %  50.63  #5
Prompt Engineering  ImageNet-R                CoCoOp  Top-1 accuracy %  76.18  #5
Prompt Engineering  ImageNet-S                CoCoOp  Top-1 accuracy %  48.75  #5
Prompt Engineering  ImageNet V2               CoCoOp  Top-1 accuracy %  64.07  #3
Prompt Engineering  Oxford 102 Flower         CoCoOp  Harmonic mean     81.71  #8
Prompt Engineering  Oxford-IIIT Pet Dataset   CoCoOp  Harmonic mean     96.43  #5
Prompt Engineering  Stanford Cars             CoCoOp  Harmonic mean     72.01  #8
Prompt Engineering  SUN397                    CoCoOp  Harmonic mean     78.27  #8
Prompt Engineering  UCF101                    CoCoOp  Harmonic mean     77.64  #8
