Self-Supervised Learning

MoCo v2

Introduced by Chen et al. in Improved Baselines with Momentum Contrastive Learning

MoCo v2 is an improved version of the Momentum Contrast (MoCo) self-supervised learning algorithm. Motivated by the findings presented in the SimCLR paper, the authors:

  • Replace the 1-layer fully connected projection head with a 2-layer MLP head with ReLU for the unsupervised training stage.
  • Add Gaussian blur augmentation.
  • Use a cosine learning rate schedule.

These modifications enable MoCo to outperform SimCLR (the prior state of the art) while using a smaller batch size and fewer training epochs.
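
The three changes above can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the reference implementation: the feature and projection dimensions, augmentation parameters, learning rate, and epoch count are assumptions chosen for concreteness.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

# Dimensions are assumptions for illustration (e.g. a ResNet-50 backbone).
feat_dim, proj_dim = 2048, 128

# MoCo v1-style head: a single fully connected layer.
head_v1 = nn.Linear(feat_dim, proj_dim)

# MoCo v2-style head: a 2-layer MLP with a ReLU in between.
head_v2 = nn.Sequential(
    nn.Linear(feat_dim, feat_dim),
    nn.ReLU(inplace=True),
    nn.Linear(feat_dim, proj_dim),
)

# Augmentation pipeline with Gaussian blur added; exact parameters are assumptions.
augment = T.Compose([
    T.RandomResizedCrop(224, scale=(0.2, 1.0)),
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    T.RandomGrayscale(p=0.2),
    T.RandomApply([T.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0))], p=0.5),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

# Cosine learning rate schedule over the full training run
# (head_v2 stands in for the full encoder here).
optimizer = torch.optim.SGD(head_v2.parameters(), lr=0.03,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
```

Each training view is produced by applying `augment` independently to the same image; calling `scheduler.step()` once per epoch decays the learning rate along a cosine curve.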

Source: Improved Baselines with Momentum Contrastive Learning
