
Bootstrap Your Own Latent

Introduced by Grill et al. in Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning

BYOL (Bootstrap Your Own Latent) is a new approach to self-supervised learning. BYOL's goal is to learn a representation $y_\theta$ which can then be used for downstream tasks. BYOL uses two neural networks to learn: the online and target networks. The online network is defined by a set of weights $\theta$ and comprises three stages: an encoder $f_\theta$, a projector $g_\theta$ and a predictor $q_\theta$. The target network has the same architecture as the online network, but uses a different set of weights $\xi$. The target network provides the regression targets to train the online network, and its parameters $\xi$ are an exponential moving average of the online parameters $\theta$.
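A minimal PyTorch-style sketch of the two networks follows, assuming a generic backbone encoder and illustrative layer sizes (the class names, head architecture, and dimensions here are assumptions for illustration, not the authors' official implementation):

```python
import copy

import torch
import torch.nn as nn


class MLP(nn.Module):
    """Illustrative projector / predictor head: Linear -> BatchNorm -> ReLU -> Linear."""

    def __init__(self, in_dim, hidden_dim=4096, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)


class BYOL(nn.Module):
    """Sketch of the online / target network pair; not the official implementation."""

    def __init__(self, encoder, feat_dim, tau=0.996):
        super().__init__()
        # Online network: encoder f_theta, projector g_theta, predictor q_theta.
        self.online_encoder = encoder
        self.online_projector = MLP(feat_dim)
        self.predictor = MLP(256, out_dim=256)
        # Target network: same architecture, weights xi initialised as a copy of theta.
        # It receives no gradients; xi is only updated as an EMA of theta.
        self.target_encoder = copy.deepcopy(encoder)
        self.target_projector = copy.deepcopy(self.online_projector)
        for p in list(self.target_encoder.parameters()) + list(self.target_projector.parameters()):
            p.requires_grad = False
        self.tau = tau  # EMA decay rate

    @torch.no_grad()
    def update_target(self):
        # xi <- tau * xi + (1 - tau) * theta
        target_params = list(self.target_encoder.parameters()) + list(self.target_projector.parameters())
        online_params = list(self.online_encoder.parameters()) + list(self.online_projector.parameters())
        for t, o in zip(target_params, online_params):
            t.data.mul_(self.tau).add_(o.data, alpha=1 - self.tau)
```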

As shown in the architecture diagram, BYOL minimizes a similarity loss between $q_\theta(z_\theta)$ and $\mathrm{sg}(z'_\xi)$, where $\theta$ are the trained weights, $\xi$ is an exponential moving average of $\theta$, and $\mathrm{sg}$ denotes stop-gradient. At the end of training, everything but $f_\theta$ is discarded, and $y_\theta$ is used as the image representation.
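Continuing the sketch above, one possible training step: the loss is the normalized mean squared error between the online prediction and the stop-gradient target projection, which equals $2 - 2\,\frac{\langle q_\theta(z_\theta),\, z'_\xi\rangle}{\lVert q_\theta(z_\theta)\rVert \, \lVert z'_\xi\rVert}$, symmetrized over the two augmented views and followed by the EMA update of $\xi$. The function names and optimizer handling here are illustrative assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F


def byol_loss(p, z):
    """Normalized MSE between the online prediction p = q_theta(z_theta)
    and the target projection z = sg(z'_xi); equals 2 - 2 * cosine_similarity(p, z)."""
    p = F.normalize(p, dim=-1)
    z = F.normalize(z, dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()


def training_step(model, view1, view2, optimizer):
    # view1, view2: two augmented views of the same image batch.
    # Online branch (gradients flow through theta).
    p1 = model.predictor(model.online_projector(model.online_encoder(view1)))
    p2 = model.predictor(model.online_projector(model.online_encoder(view2)))
    # Target branch: stop-gradient, i.e. sg(.).
    with torch.no_grad():
        t1 = model.target_projector(model.target_encoder(view1))
        t2 = model.target_projector(model.target_encoder(view2))
    # Symmetrized loss: each view's online prediction regresses the
    # target projection of the other view.
    loss = byol_loss(p1, t2) + byol_loss(p2, t1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    model.update_target()  # xi <- tau * xi + (1 - tau) * theta
    return loss.item()
```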

Source: Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning

Image credit: Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning


Latest Papers

Hyperspherically Regularized Networks for BYOL Improves Feature Uniformity and Separability
Aiden Durrant, Georgios Leontidis
2021-04-29

Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples
Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Armand Joulin, Nicolas Ballas, Michael Rabbat
2021-04-28

Leveraging background augmentations to encourage semantic focus in self-supervised contrastive learning
Chaitanya K. Ryali, David J. Schwab, Ari S. Morcos
2021-03-23

Self-supervised representation learning from 12-lead ECG data
Temesgen Mehari, Nils Strodthoff
2021-03-23

BYOL for Audio: Self-Supervised Learning for General-Purpose Audio Representation
Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, Kunio Kashino
2021-03-11

Shift Equivariance for Pixel-based Self-supervised SAR-optical Feature Fusion
Yuxing Chen, Lorenzo Bruzzone
2021-03-09

Self-supervised Pretraining of Visual Features in the Wild
Priya Goyal, Mathilde Caron, Benjamin Lefaudeux, Min Xu, Pengchao Wang, Vivek Pai, Mannat Singh, Vitaliy Liptchinsky, Ishan Misra, Armand Joulin, Piotr Bojanowski
2021-03-02

Bootstrapped Representation Learning on Graphs
Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Rémi Munos, Petar Veličković, Michal Valko
2021-02-12

Understanding self-supervised Learning Dynamics without Contrastive Pairs
Yuandong Tian, Xinlei Chen, Surya Ganguli
2021-02-12

Self-Supervised Representation Learning from Flow Equivariance
Yuwen Xiong, Mengye Ren, Wenyuan Zeng, Raquel Urtasun
2021-01-16

Self-supervised Adversarial Robustness for the Low-label, High-data Regime
Anonymous
2021-01-01

Run Away From your Teacher: a New Self-Supervised Approach Solving the Puzzle of BYOL
Anonymous
2021-01-01

ISD: Self-Supervised Learning by Iterative Similarity Distillation
Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Vipin Pillai, Paolo Favaro, Hamed Pirsiavash
2020-12-16

Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, Michal Valko
2020-12-01

How Well Do Self-Supervised Models Transfer?
Linus Ericsson, Henry Gouk, Timothy M. Hospedales
2020-11-26

Components

No components found.

Categories

Self-Supervised Learning