1 code implementation • 20 Oct 2024 • Rohan Saha, Abrar Fahim, Alona Fyshe, Alex Murphy
In such limited-data / limited-compute settings, various methods aim to $\textit{do more with less}$, such as finetuning from a pretrained model, modulating the difficulty of examples as they are presented to the model (curriculum learning), and considering the role of model type and size.
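As an illustration of the curriculum-learning idea mentioned above, here is a minimal sketch that orders training examples from easy to hard before feeding them to a model. The `difficulty_score` proxy and the `"tokens"` field are hypothetical placeholders for illustration, not the method from the paper.

```python
import torch
from torch.utils.data import DataLoader, Subset

def difficulty_score(example):
    # Hypothetical difficulty proxy; e.g., longer sequences count as harder.
    return len(example["tokens"])

def curriculum_loader(dataset, batch_size=32):
    # Sort example indices from easy to hard and wrap them in a DataLoader.
    # shuffle=False preserves the easy-to-hard curriculum ordering.
    order = sorted(range(len(dataset)), key=lambda i: difficulty_score(dataset[i]))
    return DataLoader(Subset(dataset, order), batch_size=batch_size, shuffle=False)
```

In practice, curricula often re-sort or re-sample per epoch rather than fixing one global order, but the core idea is the same: control the difficulty of what the model sees, and when.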
no code implementations • 28 May 2024 • Abrar Fahim, Alex Murphy, Alona Fyshe
We present evidence attributing this contrastive gap to low uniformity in CLIP's embedding space, which leaves the embeddings occupying only a small portion of the latent space.
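For context, "uniformity" here can be measured with the Gaussian-potential metric of Wang & Isola (2020), a standard way to quantify how evenly normalized embeddings cover the unit hypersphere. The sketch below assumes PyTorch and is illustrative, not the paper's exact evaluation code.

```python
import torch
import torch.nn.functional as F

def uniformity(embeddings: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Gaussian-potential uniformity (Wang & Isola, 2020): lower is more uniform."""
    x = F.normalize(embeddings, dim=-1)          # project onto the unit hypersphere
    sq_dists = torch.pdist(x, p=2).pow(2)        # pairwise squared Euclidean distances
    return sq_dists.mul(-t).exp().mean().log()   # log E[exp(-t * ||u - v||^2)]
```

Values near 0 indicate embeddings collapsed into a small region (all pairwise distances near zero), while strongly negative values indicate embeddings spread across the whole hypersphere, consistent with the low-uniformity claim above.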
1 code implementation • 16 Jun 2022 • Abrar Fahim, Mohammed Eunus Ali, Muhammad Aamir Cheema
We achieve this advantage by formulating a multi-objective custom loss function that quantifies the quality of a given data-space partition without requiring ground-truth labels, making the method entirely unsupervised.
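As a rough illustration of what an unsupervised, multi-objective partition loss might look like (the actual objectives and weighting in the paper may differ), the sketch below combines a compactness term, encouraging points to sit near their assigned partition centroid, with a balance term, encouraging equally sized partitions. All names, weights, and the soft-assignment scheme are hypothetical.

```python
import torch

def partition_loss(points, centroids, alpha=1.0, beta=1.0, temp=0.1):
    # Soft assignment of each point to its nearest partition centroid.
    dists = torch.cdist(points, centroids)        # (n_points, n_partitions)
    assign = torch.softmax(-dists / temp, dim=1)  # differentiable memberships

    # Compactness: expected distance of a point to its assigned centroid.
    compactness = (assign * dists).sum(dim=1).mean()

    # Balance: penalize uneven partition sizes relative to a uniform split.
    sizes = assign.mean(dim=0)                    # fraction of mass per partition
    uniform = torch.full_like(sizes, 1.0 / sizes.numel())
    balance = ((sizes - uniform) ** 2).sum()

    # Multi-objective loss: no ground-truth labels are required anywhere.
    return alpha * compactness + beta * balance
```

Because every term is computed from the data and the partition itself, the loss can be minimized by gradient descent with no supervision, which is the key property the abstract highlights.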