Contrastive Language-Image Pre-training (CLIP), consisting of a simplified version of ConVIRT trained from scratch, is an efficient method of image representation learning from natural language supervision. CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time, the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset's classes.
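A minimal sketch of how the zero-shot classifier can be synthesized from class names, assuming pre-trained `image_encoder`, `text_encoder`, and `tokenize` components (hypothetical stand-ins, not the paper's exact API):

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image, class_names, image_encoder, text_encoder, tokenize):
    # Embed a natural-language prompt for every class to form the classifier weights.
    prompts = [f"a photo of a {name}" for name in class_names]
    text_features = F.normalize(text_encoder(tokenize(prompts)), dim=-1)  # [num_classes, d]

    # Embed the query image and score it against every class embedding.
    image_features = F.normalize(image_encoder(image), dim=-1)            # [1, d]
    logits = image_features @ text_features.t()                           # cosine similarities
    return logits.softmax(dim=-1)                                         # class probabilities
```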
For pre-training, CLIP is trained to predict which of the $N \times N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores.
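A minimal sketch of this symmetric contrastive objective in PyTorch, assuming `image_encoder` and `text_encoder` return fixed-size feature vectors; the temperature is shown as a fixed scalar here, whereas CLIP learns it as a parameter:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(images, texts, image_encoder, text_encoder, temperature=0.07):
    # Embed both modalities and L2-normalize so dot products are cosine similarities.
    img_emb = F.normalize(image_encoder(images), dim=-1)  # [N, d]
    txt_emb = F.normalize(text_encoder(texts), dim=-1)    # [N, d]

    # N x N similarity matrix: entry (i, j) scores image i against text j.
    logits = img_emb @ txt_emb.t() / temperature

    # The correct pairing for row i is column i.
    labels = torch.arange(images.shape[0], device=logits.device)

    # Symmetric cross entropy: classify texts given images and images given texts.
    loss_i = F.cross_entropy(logits, labels)
    loss_t = F.cross_entropy(logits.t(), labels)
    return (loss_i + loss_t) / 2
```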
Image credit: Learning Transferable Visual Models From Natural Language Supervision
Source: Learning Transferable Visual Models From Natural Language Supervision
Task | Papers | Share |
---|---|---|
Language Modelling | 45 | 4.16% |
Image Classification | 43 | 3.97% |
Language Modeling | 43 | 3.97% |
Semantic Segmentation | 41 | 3.79% |
Image Generation | 35 | 3.23% |
Zero-Shot Learning | 31 | 2.87% |
Retrieval | 28 | 2.59% |
Image Captioning | 21 | 1.94% |
Visual Question Answering | 21 | 1.94% |