Vision and Language Pre-Trained Models

Contrastive Language-Image Pre-training

Introduced by Radford et al. in Learning Transferable Visual Models From Natural Language Supervision

Contrastive Language-Image Pre-training (CLIP) is an efficient method for learning image representations from natural language supervision; it consists of a simplified version of ConVIRT trained from scratch. CLIP jointly trains an image encoder and a text encoder to predict the correct pairings of a batch of (image, text) training examples. At test time, the learned text encoder synthesizes a zero-shot linear classifier by embedding the names or descriptions of the target dataset's classes.
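The zero-shot step can be sketched as follows: embed each class name (or a prompt built from it), then assign an image to the class whose text embedding has the highest cosine similarity. This is a minimal illustration assuming the encoders have already produced embedding vectors; the function name and toy inputs are hypothetical, not part of the CLIP codebase.

```python
import numpy as np

def zero_shot_classify(image_emb, class_text_embs, class_names):
    """Pick the class whose text embedding is most similar to the image
    embedding (illustrative sketch; real CLIP uses learned encoders)."""
    # L2-normalize so dot products equal cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb)
    class_text_embs = class_text_embs / np.linalg.norm(
        class_text_embs, axis=1, keepdims=True
    )
    sims = class_text_embs @ image_emb  # one cosine similarity per class
    return class_names[int(np.argmax(sims))]

# Toy example: 3 classes with (hypothetical) 3-d text embeddings
names = ["cat", "dog", "bird"]
text_embs = np.eye(3)
print(zero_shot_classify(np.array([0.9, 0.1, 0.0]), text_embs, names))
```

In the paper, class names are wrapped in prompt templates such as "a photo of a {label}" before encoding, which improves zero-shot accuracy over raw labels.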

For pre-training, CLIP is trained to predict which of the $N \times N$ possible (image, text) pairings across a batch actually occurred. CLIP learns a multi-modal embedding space by jointly training an image encoder and text encoder to maximize the cosine similarity of the image and text embeddings of the $N$ real pairs in the batch while minimizing the cosine similarity of the embeddings of the $N^2 - N$ incorrect pairings. A symmetric cross entropy loss is optimized over these similarity scores.
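The symmetric objective described above can be sketched in NumPy: build the $N \times N$ cosine-similarity matrix, then average a cross-entropy loss computed over its rows (image-to-text) and its columns (text-to-image), with the diagonal as the correct labels. The function name and the fixed temperature value are illustrative assumptions; in CLIP the temperature is a learned parameter.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric cross-entropy over cosine similarities, sketching
    CLIP's pre-training objective (temperature fixed for illustration)."""
    # L2-normalize so dot products equal cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # N x N logits: logits[i, j] = sim(image_i, text_j) / temperature
    logits = image_emb @ text_emb.T / temperature
    n = logits.shape[0]
    labels = np.arange(n)  # the correct text for image i is text i

    def cross_entropy(lg, lb):
        # numerically stable row-wise softmax cross-entropy
        shifted = lg - lg.max(axis=1, keepdims=True)
        log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lb)), lb].mean()

    # symmetric loss: classify texts given images and images given texts
    loss_i = cross_entropy(logits, labels)    # image -> text direction
    loss_t = cross_entropy(logits.T, labels)  # text -> image direction
    return (loss_i + loss_t) / 2
```

When the image and text embeddings of each real pair coincide and different pairs are orthogonal, the loss approaches zero; for mismatched embeddings it stays positive, which is what drives the encoders toward a shared embedding space.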

Source: Learning Transferable Visual Models From Natural Language Supervision

Tasks


Task Papers Share
Image Classification 49 5.81%
Zero-Shot Learning 41 4.86%
Image Generation 34 4.03%
Language Modelling 32 3.79%
Semantic Segmentation 30 3.55%
Image Captioning 27 3.20%
Object Detection 20 2.37%
Video Retrieval 20 2.37%
Image Retrieval 19 2.25%
