no code implementations • 14 Feb 2024 • Laura Niss, Kevin Vogt-Lowell, Theodoros Tsiligkaridis
Foundation models are presented as generalists that often perform well across a myriad of tasks.
1 code implementation • 3 Nov 2023 • Kevin Vogt-Lowell, Noah Lee, Theodoros Tsiligkaridis, Marc Vaillant
To address these gaps, we present a new recipe for few-shot fine-tuning of the popular vision-language foundation model CLIP and evaluate its performance on challenging benchmark datasets with realistic distribution shifts from the WILDS collection.