no code implementations • 22 Jan 2024 • Fatema-E Jannat, Sina Gholami, Minhaj Nur Alam, Hamed Tabkhi
Our method addresses this issue with a two-phase training approach that combines self-supervised pretraining and supervised fine-tuning of a masked autoencoder built on a SwinV2 backbone, providing a solution suited to real-world clinical deployment.
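The two-phase recipe described above can be illustrated with a toy sketch: phase 1 trains a linear encoder/decoder to reconstruct randomly masked inputs (a stand-in for the masked autoencoder; the actual work uses a SwinV2 backbone), and phase 2 fits a supervised head on the frozen pretrained encoder. All data, shapes, and the linear-probe simplification here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 256, 32, 8                      # samples, input dim, latent dim
X = rng.normal(size=(n, d))               # synthetic stand-in for image features

# --- Phase 1: self-supervised pretraining via masked reconstruction ---
W_enc = rng.normal(scale=0.1, size=(d, h))
W_dec = rng.normal(scale=0.1, size=(h, d))
lr = 1e-2
for _ in range(200):
    mask = rng.random(X.shape) > 0.5      # randomly mask half the input entries
    Xm = X * mask
    Z = Xm @ W_enc                        # encode the masked input
    Xr = Z @ W_dec                        # reconstruct the full (unmasked) input
    err = Xr - X
    # gradients of the mean squared reconstruction error
    g_dec = Z.T @ err / n
    g_enc = Xm.T @ (err @ W_dec.T) / n
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# --- Phase 2: supervised fine-tuning (simplified to a frozen-encoder
# linear probe with a logistic head; the real method fine-tunes end to end) ---
y = (X @ rng.normal(size=d) > 0).astype(float)   # synthetic binary labels
W_head = np.zeros(h)
for _ in range(200):
    Z = X @ W_enc
    p = 1 / (1 + np.exp(-(Z @ W_head)))          # logistic head on latents
    W_head -= lr * Z.T @ (p - y) / n             # logistic-loss gradient step

acc = ((1 / (1 + np.exp(-(X @ W_enc @ W_head))) > 0.5) == y).mean()
```

The key point the sketch captures is that the encoder weights learned without labels in phase 1 are reused as the initialization for the labeled phase 2 task, which is what lets the approach work with limited annotated clinical data.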
no code implementations • 24 Aug 2022 • Minhaj Nur Alam, Rikiya Yamashita, Vignav Ramesh, Tejas Prabhune, Jennifer I. Lim, R. V. P. Chan, Joelle Hallak, Theodore Leng, Daniel Rubin
CL-based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferable from EyePACS to UIC data), and enables training with small annotated datasets, thereby reducing the ground-truth annotation burden on clinicians.