CoOp, or Context Optimization, is an automated prompt engineering method that avoids manual prompt tuning by modeling a prompt's context words with continuous vectors that are learned end-to-end from data. The context can be shared among all classes or designed to be class-specific. During training, the prediction error is minimized using the cross-entropy loss with respect to the learnable context vectors, while the pre-trained model parameters are kept fixed. The gradients can be back-propagated all the way through the text encoder, distilling the rich knowledge encoded in its parameters for learning task-relevant context.
Source: Learning to Prompt for Vision-Language Models
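The following is a minimal PyTorch sketch of this idea, not the authors' implementation: learnable context vectors are prepended to frozen class-name token embeddings, pushed through a frozen text encoder, and trained with cross-entropy on image-text similarity logits. The names `PromptLearner`, `coop_loss`, and `text_encoder` are illustrative, and `text_encoder` is assumed to be a wrapper around a frozen CLIP-style text transformer that accepts continuous token embeddings and returns pooled text features; positional embeddings and end-of-text token handling are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PromptLearner(nn.Module):
    """Shared learnable context vectors, following the unified-context variant of CoOp."""

    def __init__(self, n_ctx: int, ctx_dim: int, class_token_embeddings: torch.Tensor):
        super().__init__()
        # The context vectors are the only trainable parameters.
        self.ctx = nn.Parameter(torch.empty(n_ctx, ctx_dim))
        nn.init.normal_(self.ctx, std=0.02)
        # Pre-computed token embeddings of the class names (kept frozen).
        # Shape: [n_cls, n_cls_tokens, ctx_dim]
        self.register_buffer("cls_emb", class_token_embeddings)

    def forward(self) -> torch.Tensor:
        n_cls = self.cls_emb.shape[0]
        # Broadcast the shared context to every class: "[V]_1 ... [V]_M [CLASS]".
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)
        return torch.cat([ctx, self.cls_emb], dim=1)


def coop_loss(prompt_learner, text_encoder, image_features, labels, logit_scale):
    """Cross-entropy over cosine-similarity logits; gradients reach only the context."""
    prompts = prompt_learner()                        # [n_cls, n_tokens, ctx_dim]
    text_features = text_encoder(prompts)             # [n_cls, feat_dim], encoder frozen
    text_features = F.normalize(text_features, dim=-1)
    image_features = F.normalize(image_features, dim=-1)
    logits = logit_scale * image_features @ text_features.t()
    return F.cross_entropy(logits, labels)


# Training outline: freeze the pre-trained encoders and optimize only the context.
# for p in text_encoder.parameters():
#     p.requires_grad_(False)
# optimizer = torch.optim.SGD(prompt_learner.parameters(), lr=2e-3)
```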
| Task | Papers | Share |
|---|---|---|
| Domain Generalization | 6 | 13.33% |
| Prompt Engineering | 5 | 11.11% |
| Few-Shot Learning | 4 | 8.89% |
| Image Classification | 3 | 6.67% |
| Zero-Shot Learning | 3 | 6.67% |
| Object | 2 | 4.44% |
| Object Detection | 2 | 4.44% |
| Image Generation | 2 | 4.44% |
| Out of Distribution (OOD) Detection | 2 | 4.44% |