1 code implementation • 14 Mar 2024 • Afrina Tabassum, Dung Tran, Trung Dang, Ismini Lourentzou, Kazuhito Koishida
Masked Autoencoders (MAEs) learn rich low-level representations from unlabeled data but require substantial labeled data to effectively adapt to downstream tasks.
no code implementations • 2 Jun 2022 • Afrina Tabassum, Muntasir Wahed, Hoda Eldardiry, Ismini Lourentzou
One of the challenges in contrastive learning is the selection of appropriate hard negative examples in the absence of label information.
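To make the challenge concrete, a minimal sketch of similarity-based hard negative selection (not the paper's method; the function name and NumPy formulation are illustrative assumptions): without labels, the candidates most similar to an anchor are treated as "hard" negatives, at the risk of including false negatives.

```python
import numpy as np

def hardest_negatives(anchors, candidates, k=1):
    """Illustrative hard negative mining: for each anchor embedding,
    return the indices of the k most cosine-similar candidates.
    Without labels, high similarity is used as a proxy for 'hardness'."""
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    sim = a @ c.T  # (num_anchors, num_candidates) similarity matrix
    # indices of the top-k most similar candidates per anchor
    return np.argsort(-sim, axis=1)[:, :k]
```

A candidate nearly collinear with the anchor is selected first, which is exactly why unlabeled hard negative mining risks picking semantically positive examples.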
no code implementations • 21 Apr 2022 • Muntasir Wahed, Afrina Tabassum, Ismini Lourentzou
Contrastive learning has gained popularity as an effective self-supervised representation learning technique.
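For context, the standard objective behind this family of methods can be sketched as an InfoNCE loss (a generic formulation, not this paper's specific contribution; the function name and temperature value are assumptions): matched pairs of embeddings are pulled together while all other in-batch pairs act as negatives.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """Generic InfoNCE contrastive loss over a batch of embedding pairs.
    Row i of z1 and row i of z2 form a positive pair; every other row
    of z2 serves as an in-batch negative for anchor i."""
    # L2-normalize so similarities are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature  # (N, N) scaled similarity matrix
    # cross-entropy with targets on the diagonal (the positive pairs)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Aligned pairs yield a near-zero loss, while mismatched pairs are heavily penalized, which is what drives representations of related views toward each other.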