MaPLe: Multi-modal Prompt Learning

Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.
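The core idea, coupling the two branches by generating the vision prompts from the language prompts and repeating this at several early transformer stages (deep prompting), can be illustrated with a minimal PyTorch sketch. The class name, prompt length `n_ctx`, prompt depth, and embedding dimensions below are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MultiModalPrompts(nn.Module):
    """Minimal sketch of coupled vision-language prompts (assumed names/dims)."""

    def __init__(self, n_ctx=2, depth=9, txt_dim=512, vis_dim=768):
        super().__init__()
        # One set of learnable language prompt tokens per early transformer stage.
        self.text_prompts = nn.ParameterList(
            [nn.Parameter(torch.randn(n_ctx, txt_dim) * 0.02) for _ in range(depth)]
        )
        # Coupling functions: the vision prompts are projected from the language
        # prompts, so the two branches cannot drift into independent solutions.
        self.couplers = nn.ModuleList(
            [nn.Linear(txt_dim, vis_dim) for _ in range(depth)]
        )

    def forward(self):
        # Per-stage (language, vision) prompt pairs, to be prepended to the token
        # sequences of the corresponding CLIP text and image encoder layers.
        return [(p, f(p)) for p, f in zip(self.text_prompts, self.couplers)]
```

Tying the vision prompts to the language prompts through a learned projection is what enforces the cross-modal coupling described above: each branch receives its own prompt tokens, but only the language prompts and the projection layers are free parameters.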


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Prompt Engineering | Caltech-101 | MaPLe | Harmonic mean | 96.02 | #7 |
| Prompt Engineering | DTD | MaPLe | Harmonic mean | 68.16 | #8 |
| Prompt Engineering | EuroSAT | MaPLe | Harmonic mean | 82.35 | #6 |
| Prompt Engineering | FGVC-Aircraft | MaPLe | Harmonic mean | 36.50 | #7 |
| Prompt Engineering | Food-101 | MaPLe | Harmonic mean | 91.38 | #3 |
| Prompt Engineering | ImageNet | MaPLe | Harmonic mean | 73.47 | #9 |
| Prompt Engineering | ImageNet-A | MaPLe | Top-1 accuracy % | 50.90 | #2 |
| Prompt Engineering | ImageNet-R | MaPLe | Top-1 accuracy % | 76.98 | #5 |
| Prompt Engineering | ImageNet-S | MaPLe | Top-1 accuracy % | 49.15 | #5 |
| Prompt Engineering | ImageNet V2 | MaPLe | Top-1 accuracy % | 64.07 | #3 |
| Prompt Engineering | Oxford 102 Flower | MaPLe | Harmonic mean | 82.56 | #8 |
| Prompt Engineering | Oxford-IIIT Pet Dataset | MaPLe | Harmonic mean | 96.58 | #4 |
| Prompt Engineering | Stanford Cars | MaPLe | Harmonic mean | 73.47 | #8 |
| Prompt Engineering | SUN397 | MaPLe | Harmonic mean | 79.75 | #7 |
| Prompt Engineering | UCF101 | MaPLe | Harmonic mean | 80.82 | #7 |
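The "Harmonic mean" metric above comes from the base-to-novel generalization protocol: it combines accuracy on base (seen) classes and novel (unseen) classes into a single score. A minimal sketch (the helper name is illustrative; the example uses MaPLe's reported ImageNet base/novel accuracies of roughly 76.66% and 70.54%):

```python
def harmonic_mean(base_acc: float, novel_acc: float) -> float:
    """Harmonic mean of base- and novel-class accuracy (both in %)."""
    return 2 * base_acc * novel_acc / (base_acc + novel_acc)

# Roughly 76.66 (base) and 70.54 (novel) on ImageNet give the 73.47 listed above.
print(round(harmonic_mean(76.66, 70.54), 2))  # 73.47
```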
