MoCo v3 aims to stabilize the training of self-supervised Vision Transformers (ViTs); it is an incremental improvement over MoCo v1/v2. Two crops of each image are taken under random data augmentation and encoded by two encoders, $f_q$ and $f_k$, with output vectors $q$ and $k$. The vector $q$ behaves like a "query", and the goal of learning is to retrieve the corresponding "key". The objective is to minimize a contrastive loss function of the following form:
$$ \mathcal{L}_q = -\log \frac{\exp \left(q \cdot k^{+} / \tau\right)}{\exp \left(q \cdot k^{+} / \tau\right)+\sum_{k^{-}} \exp \left(q \cdot k^{-} / \tau\right)} $$
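This loss can be sketched in NumPy (a minimal illustration, not the paper's implementation): with a batch of queries and keys stacked row-wise, each query's positive key sits on the diagonal of the similarity matrix and all other keys in the batch serve as negatives.

```python
import numpy as np

def contrastive_loss(q, k, tau=0.2):
    """InfoNCE-style contrastive loss: query q[i] should match key k[i]
    (its positive); the other keys in the batch act as negatives k^-.
    q, k: arrays of shape [N, D]. Illustrative sketch only."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)  # l2-normalize rows
    k = k / np.linalg.norm(k, axis=1, keepdims=True)
    logits = q @ k.T / tau                            # [N, N] similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    # log-softmax over each row; positives are the diagonal entries
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob.diagonal().mean()
```

In practice this amounts to cross-entropy over the batch's similarity matrix with the diagonal as the target class, which is how contrastive losses of this form are commonly implemented.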
This approach trains the Transformer in the contrastive/Siamese paradigm. The encoder $f_q$ consists of a backbone (e.g., ResNet or ViT), a projection head, and an extra prediction head. The encoder $f_k$ has the backbone and projection head but not the prediction head. $f_k$ is updated as the moving average of $f_q$, excluding the prediction head.
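The moving-average update of $f_k$ can be sketched as follows (an illustrative sketch under assumed parameter lists, not the paper's code):

```python
import numpy as np

def momentum_update(params_q, params_k, m=0.99):
    """Exponential-moving-average update of the key encoder f_k from the
    query encoder f_q: theta_k <- m * theta_k + (1 - m) * theta_q.
    Since the prediction head exists only in f_q, its parameters are
    excluded from params_q/params_k. Illustrative sketch only."""
    return [m * pk + (1.0 - m) * pq for pq, pk in zip(params_q, params_k)]
```

No gradients flow into $f_k$; it changes only through this update, which keeps the keys slowly evolving and consistent across training steps.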
Source: An Empirical Study of Training Self-Supervised Vision Transformers
Task | Papers | Share
---|---|---
Self-Supervised Learning | 6 | 15.38%
Semantic Segmentation | 3 | 7.69%
Self-Supervised Image Classification | 3 | 7.69%
Image Classification | 2 | 5.13%
Image Segmentation | 1 | 2.56%
Medical Image Segmentation | 1 | 2.56%
Philosophy | 1 | 2.56%
Image Augmentation | 1 | 2.56%
Visual Prompt Tuning | 1 | 2.56%