no code implementations • 26 May 2024 • Neha Kalibhat, Priyatham Kattakinda, Arman Zarei, Nikita Seleznev, Samuel Sharpe, Senthil Kumar, Soheil Feizi
Vision transformers have established a precedent of patchifying images into uniformly-sized chunks before processing.
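The uniform patchify step the abstract refers to is the standard ViT tokenization. Below is a minimal NumPy sketch of it for reference; the helper name and `patch_size=16` are illustrative choices, not taken from the paper.

```python
import numpy as np

def patchify(image: np.ndarray, patch_size: int = 16) -> np.ndarray:
    """Split an (H, W, C) image into uniformly-sized, non-overlapping patches.

    Returns an array of shape (num_patches, patch_size * patch_size * C),
    the flattened-token layout a ViT embeds before adding positions.
    """
    H, W, C = image.shape
    assert H % patch_size == 0 and W % patch_size == 0, "image must tile evenly"
    # Reshape into a grid of patches, then flatten each patch into a token.
    grid = image.reshape(H // patch_size, patch_size, W // patch_size, patch_size, C)
    return grid.transpose(0, 2, 1, 3, 4).reshape(-1, patch_size * patch_size * C)

tokens = patchify(np.zeros((224, 224, 3)), patch_size=16)  # -> shape (196, 768)
```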
no code implementations • 8 Mar 2024 • Warren Morningstar, Alex Bijamov, Chris Duvarney, Luke Friedman, Neha Kalibhat, Luyang Liu, Philip Mansfield, Renan Rojas-Gomez, Karan Singhal, Bradley Green, Sushant Prakash
We study the relative effects of data augmentations, pretraining algorithms, and model architectures in Self-Supervised Learning (SSL).
no code implementations • 2 Dec 2023 • Neha Kalibhat, Warren Morningstar, Alex Bijamov, Luyang Liu, Karan Singhal, Philip Mansfield
We define a set of augmentations in frequency space, called Fourier Domain Augmentations (FDA), and show that training SSL models on a combination of these and standard image augmentations can improve downstream classification accuracy by up to 1.3% on ImageNet-1K.
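The exact FDA transforms are defined in the paper; as a rough illustration of what an augmentation in frequency space can look like, the sketch below jitters an image's Fourier amplitude while preserving its phase. The function name and the multiplicative-noise scheme are assumptions for exposition, not the paper's recipe.

```python
import numpy as np

def fourier_amplitude_jitter(image: np.ndarray, strength: float = 0.1,
                             rng=None) -> np.ndarray:
    """Perturb an image's Fourier amplitude while keeping its phase.

    NOTE: an illustrative frequency-space augmentation, not the paper's FDA.
    `strength` scales multiplicative noise on the amplitude spectrum; the
    phase, which carries most structural content, is left intact.
    """
    rng = rng or np.random.default_rng()
    spectrum = np.fft.fft2(image, axes=(0, 1))          # per-channel 2D FFT
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    noise = 1.0 + strength * rng.standard_normal(amplitude.shape)
    jittered = amplitude * noise * np.exp(1j * phase)
    return np.real(np.fft.ifft2(jittered, axes=(0, 1)))

augmented = fourier_amplitude_jitter(np.random.rand(224, 224, 3), strength=0.1)
```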
no code implementations • 7 Sep 2023 • Neha Kalibhat, Sam Sharpe, Jeremy Goodsitt, Bayan Bruss, Soheil Feizi
Current state-of-the-art self-supervised approaches are effective when trained on individual domains but generalize poorly to unseen domains.
1 code implementation • 20 Jul 2023 • Neha Kalibhat, Shweta Bhardwaj, Bayan Bruss, Hamed Firooz, Maziar Sanjabi, Soheil Feizi
Although many existing approaches interpret features independently, we observe that, in state-of-the-art self-supervised and supervised models, less than 20% of the representation space can be explained by individual features.
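The paper defines its own notion of how much of the representation space individual features explain; as a loose, runnable stand-in, the sketch below measures the share of total variance carried by each coordinate of a frozen embedding. The function name and the random embeddings are purely illustrative, and the paper's metric may differ.

```python
import numpy as np

def variance_share_per_dimension(reps: np.ndarray) -> np.ndarray:
    """Fraction of total representation variance carried by each coordinate.

    NOTE: a crude proxy for "explained by individual features"; the paper's
    own measurement may differ.
    """
    per_dim = reps.var(axis=0)
    return per_dim / per_dim.sum()

reps = np.random.randn(10_000, 768)               # hypothetical frozen embeddings
shares = np.sort(variance_share_per_dimension(reps))[::-1]
print("variance covered by top-10 dims:", shares[:10].sum())
```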
no code implementations • 3 Mar 2022 • Neha Kalibhat, Kanika Narang, Hamed Firooz, Maziar Sanjabi, Soheil Feizi
Fine-tuning with Q-Score regularization can boost the linear probing accuracy of SSL models by up to 5.8% on ImageNet-100 and 3.7% on ImageNet-1K compared to their baselines.
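Linear probing, the evaluation these numbers refer to, is the standard protocol of fitting a linear classifier on frozen encoder features; the Q-Score regularizer itself is the paper's fine-tuning method and is not reproduced here. A minimal scikit-learn sketch with hypothetical precomputed feature arrays:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe_accuracy(train_feats, train_labels, test_feats, test_labels):
    """Fit a linear classifier on frozen SSL features; return test accuracy."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_feats, train_labels)
    return clf.score(test_feats, test_labels)

rng = np.random.default_rng(0)
feats = rng.standard_normal((1000, 512))          # hypothetical frozen features
labels = rng.integers(0, 10, size=1000)
print(linear_probe_accuracy(feats[:800], labels[:800], feats[800:], labels[800:]))
```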