Barlow Twins is a self-supervised learning method built on redundancy reduction, a principle first proposed in neuroscience. Its objective function measures the cross-correlation matrix between the embeddings of two identical networks fed with distorted versions of a batch of samples, and pushes this matrix toward the identity. This makes the embedding vectors of distorted versions of a sample similar to each other, while minimizing the redundancy between the components of these vectors. Barlow Twins requires neither large batches nor asymmetry between the network twins, such as a predictor network, gradient stopping, or a moving average on the weight updates. Intriguingly, it benefits from very high-dimensional output vectors. A minimal sketch of the loss is given below.
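The following PyTorch sketch illustrates this objective; it is not the authors' implementation. The function name `barlow_twins_loss` is illustrative, `z_a` and `z_b` are assumed to be `(N, D)` batches of projector outputs for the two distorted views of the same samples, and the trade-off weight `lambd` follows the λ ≈ 5e-3 reported in the paper.

```python
import torch

def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                      lambd: float = 5e-3) -> torch.Tensor:
    """Push the cross-correlation matrix of two embedding batches
    toward the identity. z_a, z_b: (N, D) embeddings of two distorted
    views of the same batch of samples."""
    N, D = z_a.shape
    # Standardize each embedding dimension over the batch (zero mean,
    # unit variance), so the matrix below is a cross-correlation matrix.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    # D x D cross-correlation matrix, averaged over the batch.
    c = (z_a.T @ z_b) / N
    # Invariance term: diagonal entries should be 1, so the two views
    # of a sample produce similar embeddings.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: off-diagonal entries should be 0,
    # decorrelating the components of the embedding vector.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag
```

In practice `z_a` and `z_b` would come from the same backbone-plus-projector applied to two augmentations of a batch; `lambd` weights decorrelation against invariance.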
Source: Barlow Twins: Self-Supervised Learning via Redundancy Reduction
| Task | Papers | Share |
|---|---|---|
| Self-Supervised Learning | 53 | 30.64% |
| Image Classification | 6 | 3.47% |
| Semantic Segmentation | 5 | 2.89% |
| Benchmarking | 3 | 1.73% |
| Active Learning | 3 | 1.73% |
| Image Segmentation | 3 | 1.73% |
| Language Modelling | 3 | 1.73% |
| Disentanglement | 3 | 1.73% |
| Domain Adaptation | 3 | 1.73% |