1 code implementation • 3 Dec 2024 • Alin Dondera, Anuj Singh, Hadi Jamali-Rad
Masked Autoencoders (MAEs) sit on an important divide in self-supervised learning (SSL): unlike contrastive frameworks, they do not depend on augmentation techniques to generate positive (and/or negative) pairs.
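As context for that distinction, below is a minimal sketch of the random patch-masking step that makes MAE pre-training augmentation-free, assuming a ViT-style patch grid; all names and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, seed=None):
    """Keep a random subset of patches; the decoder must reconstruct the rest.

    patches: (num_patches, dim) array of flattened image patches.
    Returns the visible patches and a boolean mask of hidden positions.
    No augmented views are needed -- the mask itself creates the pretext task.
    """
    rng = np.random.default_rng(seed)
    num_patches = patches.shape[0]
    num_keep = int(num_patches * (1.0 - mask_ratio))
    # Shuffle patch indices and keep the first `num_keep` as visible.
    perm = rng.permutation(num_patches)
    visible_idx = np.sort(perm[:num_keep])
    mask = np.ones(num_patches, dtype=bool)  # True = masked (to be reconstructed)
    mask[visible_idx] = False
    return patches[visible_idx], mask

# Example: a 14x14 grid of 768-dim patch embeddings, 75% masked out.
patches = np.random.randn(196, 768)
visible, mask = random_masking(patches, mask_ratio=0.75, seed=0)
print(visible.shape, mask.sum())  # (49, 768) 147
```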
1 code implementation • 5 Dec 2023 • Soroush Abbasi Koohpayegani, Anuj Singh, K L Navaneet, Hamed Pirsiavash, Hadi Jamali-Rad
To achieve this, we adjust the noise level (equivalently, the number of diffusion iterations) so that the generated image retains the low-level and background features of the source image while representing the target category, yielding a hard negative sample for the source category.
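A rough sketch of that noise-level mechanism, using the Hugging Face diffusers img2img pipeline, where `strength` plays the role of the noise level (fraction of diffusion iterations applied). This mirrors the idea described above under assumed model and file names; it is not the authors' actual code.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Assumed base model; any latent-diffusion img2img checkpoint would do.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

source = Image.open("source_dog.jpg").convert("RGB")  # hypothetical source image

# Moderate strength: enough noise to flip the semantics to the target
# category ("cat"), little enough that the source's background and layout
# survive -- producing a hard negative for the source category ("dog").
hard_negative = pipe(
    prompt="a photo of a cat",
    image=source,
    strength=0.5,        # noise level; higher = more diffusion iterations
    guidance_scale=7.5,
).images[0]
hard_negative.save("hard_negative_cat.jpg")
```

The key design choice is that `strength` trades off fidelity to the source against fidelity to the target prompt; sweeping it controls how "hard" the generated negative is.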
1 code implementation • 12 Oct 2022 • Ojas Kishorkumar Shirekar, Anuj Singh, Hadi Jamali-Rad
Humans have a unique ability to learn new representations from just a handful of examples with little to no supervision.
Contrastive Learning • Unsupervised Few-Shot Image Classification
1 code implementation • 22 Aug 2022 • Anuj Singh, Hadi Jamali-Rad
The versatility to learn from a handful of samples is the hallmark of human intelligence.
1 code implementation • 29 Mar 2021 • Hadi Jamali-Rad, Mohammad Abdizadeh, Anuj Singh
Classical federated learning approaches incur significant performance degradation in the presence of non-IID client data.
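To make the "classical" baseline concrete, here is a minimal sketch of the FedAvg aggregation step, assuming each client returns its model weights and local sample count; with non-IID clients, the conflicting local optima pulled into this single average are the source of the degradation described above. Names are illustrative.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Classical FedAvg: a sample-weighted average of client models.

    client_weights: list of dicts mapping layer name -> np.ndarray.
    client_sizes: number of local training samples per client.
    """
    total = float(sum(client_sizes))
    global_weights = {}
    for name in client_weights[0]:
        # Weight each client's parameters by its share of the data.
        global_weights[name] = sum(
            (n / total) * w[name] for w, n in zip(client_weights, client_sizes)
        )
    return global_weights

# Two toy clients with skewed (non-IID) data sizes.
clients = [{"fc": np.array([1.0, 2.0])}, {"fc": np.array([3.0, -1.0])}]
sizes = [90, 10]
print(fedavg(clients, sizes))  # {'fc': array([1.2, 1.7])}
```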