no code implementations • 1 Jan 2021 • Hari Sowrirajan, Jing Bo Yang, Andrew Y. Ng, Pranav Rajpurkar
Using 0.1% of labeled training data, we find that a linear model trained on MoCo-pretrained representations outperforms one trained on representations without MoCo-pretraining by an AUC of 0.096 (95% CI 0.061, 0.130), indicating that MoCo-pretrained representations are of higher quality.
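The comparison above follows the standard linear-evaluation protocol: freeze the pretrained encoder and train only a linear classifier on its features. Below is a minimal PyTorch sketch of that protocol; the ResNet-18 backbone, checkpoint path, feature dimension, and dummy labels are illustrative assumptions, not details taken from the paper.

```python
# Linear evaluation sketch: train a linear head on frozen pretrained features.
import torch
import torch.nn as nn
import torchvision.models as models

# Backbone assumed to carry MoCo-pretrained weights (checkpoint path is
# hypothetical); freeze all parameters so only the linear head is trained.
backbone = models.resnet18()
# state = torch.load("moco_pretrained.pth")   # hypothetical checkpoint
# backbone.load_state_dict(state, strict=False)
backbone.fc = nn.Identity()          # expose the 512-d penultimate features
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

head = nn.Linear(512, 1)             # single logit for a binary label
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

def train_step(images, labels):
    """One optimization step on the linear head; the encoder stays frozen."""
    with torch.no_grad():            # no gradients through the frozen encoder
        feats = backbone(images)
    logits = head(feats).squeeze(1)
    loss = criterion(logits, labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for a small (e.g. 0.1%) labeled subset.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))
print(train_step(images, labels))
```

Because the encoder is frozen, any AUC gain over a baseline encoder reflects the quality of the pretrained representations rather than additional fine-tuning capacity.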