SEER is a self-supervised learning approach for training large models on random, uncurated images with no supervision. It trains RegNet-Y architectures with the SwAV method. Several adjustments are made to self-supervised training to make it work at a larger scale, including the use of a cosine learning rate schedule.
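The cosine learning rate schedule mentioned above can be sketched in a few lines; the function name and the default rates here are illustrative, not values from the paper:

```python
import math

def cosine_lr(step, total_steps, base_lr=0.1, min_lr=0.0):
    """Cosine-annealed learning rate: decays smoothly from base_lr to min_lr."""
    progress = step / total_steps          # fraction of training completed
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

# The schedule starts at base_lr and decays to min_lr by the final step:
print(cosine_lr(0, 100))    # base_lr at the start
print(cosine_lr(100, 100))  # min_lr at the end
```

In practice such a schedule is usually combined with a short linear warmup at the start of training.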
Source: Self-supervised Pretraining of Visual Features in the Wild
| Task | Papers | Share |
|---|---|---|
| Question Answering | 2 | 9.09% |
| Reinforcement Learning (RL) | 2 | 9.09% |
| Survival Analysis | 2 | 9.09% |
| In-Context Learning | 1 | 4.55% |
| Language Modelling | 1 | 4.55% |
| Large Language Model | 1 | 4.55% |
| Federated Learning | 1 | 4.55% |
| BIG-bench Machine Learning | 1 | 4.55% |
| Time-to-Event Prediction | 1 | 4.55% |
| Component | Type |
|---|---|
| Cosine Annealing | Learning Rate Schedules |
| RegNetY | Convolutional Neural Networks |
| SwAV | Self-Supervised Learning |