1 code implementation • 3 Dec 2024 • Alin Dondera, Anuj Singh, Hadi Jamali-Rad
Masked Autoencoders (MAEs) mark an important divide in self-supervised learning (SSL) because, unlike contrastive frameworks, they do not depend on augmentation techniques to generate positive (and/or negative) pairs.
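For context, the mechanism behind that independence is the standard MAE recipe (He et al.): randomly mask most image patches and train the model to reconstruct them, so no augmented pairs are needed. A minimal, illustrative numpy sketch of the masking step (not this paper's code; names and the 75% ratio are the usual defaults, assumed here):

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, rng=None):
    """MAE-style random patch masking.

    patches: array of shape (num_patches, patch_dim).
    Returns the visible patches, a boolean mask (True = masked,
    i.e. to be reconstructed), and the kept indices.
    """
    rng = np.random.default_rng(rng)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    order = rng.permutation(n)              # random shuffle of patch indices
    keep_idx = np.sort(order[:n_keep])      # keep a random 25% subset
    mask = np.ones(n, dtype=bool)
    mask[keep_idx] = False                  # visible patches are unmasked
    return patches[keep_idx], mask, keep_idx

# Usage: 196 patches of a 14x14 ViT grid, 768-dim tokens (typical values).
patches = np.random.default_rng(0).standard_normal((196, 768))
visible, mask, keep_idx = random_masking(patches, mask_ratio=0.75, rng=1)
```

The encoder only ever sees `visible`; the reconstruction loss on the masked patches supplies the training signal that contrastive methods instead obtain from augmented pairs.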
1 code implementation • 28 Mar 2024 • Sayak Mukherjee, Andrea Simonetto, Hadi Jamali-Rad
Effective collaboration among heterogeneous clients in a decentralized setting remains largely unexplored in the literature.
1 code implementation • ICLR 2024 • Stylianos Poulakakis-Daktylidis, Hadi Jamali-Rad
Learning quickly from very few labeled samples is a fundamental attribute that separates machines from humans in the era of deep representation learning.
1 code implementation • 5 Dec 2023 • Soroush Abbasi Koohpayegani, Anuj Singh, K L Navaneet, Hamed Pirsiavash, Hadi Jamali-Rad
To achieve this, we adjust the noise level (equivalently, the number of diffusion iterations) so that the generated image retains low-level and background features of the source image while depicting the target category, yielding a hard negative sample for the source category.
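The noise level controls how much of the source image survives because of the standard DDPM forward-noising formula x_t = sqrt(ᾱ_t)·x_0 + sqrt(1−ᾱ_t)·ε: small t keeps the source structure, large t erases it before denoising toward the target class. A self-contained sketch of that forward step (illustrative only; schedule values and function names are assumptions, not the paper's code):

```python
import numpy as np

def noise_source_image(x0, t, T=1000, beta_start=1e-4, beta_end=0.02, rng=None):
    """Noise source image x0 up to timestep t with a linear DDPM beta schedule.

    Smaller t preserves more low-level/background structure of the source;
    larger t destroys it. A hard-negative generator would then denoise x_t
    conditioned on the *target* category.
    """
    rng = np.random.default_rng(rng)
    betas = np.linspace(beta_start, beta_end, T)
    alpha_bar = np.cumprod(1.0 - betas)                 # cumulative ᾱ_t
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Usage: a toy 8x8 "image" with gradient structure, noised lightly vs heavily.
x0 = np.linspace(-1.0, 1.0, 64).reshape(8, 8)
x_low = noise_source_image(x0, t=50, rng=0)    # source features mostly intact
x_high = noise_source_image(x0, t=950, rng=0)  # source features mostly erased
```

Measuring the correlation of `x_low` and `x_high` with `x0` makes the trade-off concrete: the lightly noised image stays strongly correlated with the source, while the heavily noised one is close to pure noise.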
1 code implementation • 25 Oct 2022 • Sieger Falkena, Hadi Jamali-Rad, Jan van Gemert
Binary Neural Networks (BNNs) are receiving an upsurge of attention for bringing power-hungry deep learning towards edge devices.
1 code implementation • 12 Oct 2022 • Ojas Kishorkumar Shirekar, Anuj Singh, Hadi Jamali-Rad
Humans have a unique ability to learn new representations from just a handful of examples with little to no supervision.
1 code implementation • 22 Aug 2022 • Anuj Singh, Hadi Jamali-Rad
The versatility to learn from a handful of samples is the hallmark of human intelligence.
1 code implementation • 15 Feb 2022 • Ojas Kishore Shirekar, Hadi Jamali-Rad
Unsupervised learning is argued to be the dark matter of human intelligence.
1 code implementation • 29 Mar 2021 • Hadi Jamali-Rad, Mohammad Abdizadeh, Anuj Singh
Classical federated learning approaches incur significant performance degradation in the presence of non-IID client data.
no code implementations • 25 Mar 2021 • Attila Szabo, Hadi Jamali-Rad, Siva-Datta Mannava
Traditional empirical risk minimization (ERM) for semantic segmentation can disproportionately advantage or disadvantage certain target classes in favor of an (unfair but) improved overall performance.
no code implementations • 19 Jun 2020 • Hadi Jamali-Rad, Attila Szabo
Semantic segmentation is one of the most fundamental problems in computer vision with significant impact on a wide variety of applications.