no code implementations • 29 Feb 2024 • Tina Behnia, Christos Thrampoulidis
Recent findings reveal that over-parameterized deep neural networks, when trained beyond zero training error, exhibit a distinctive structural pattern in their final layer, termed Neural Collapse (NC).
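For readers unfamiliar with the term, the geometry usually meant by Neural Collapse (following Papyan, Han & Donoho's characterization) can be summarized as below; the notation (features $h_{c,i}$, class means $\mu_c$, global mean $\mu_G$, $C$ classes) is ours, not from the abstract above:

```latex
% Within-class variability collapse: features of class c concentrate
% at the class mean.
h_{c,i} \;\to\; \mu_c \quad \text{for all samples } i \text{ of class } c.

% The centered class means converge to a simplex equiangular tight
% frame (ETF): equal norms, equal and maximal pairwise angles.
\frac{\langle \mu_c - \mu_G,\; \mu_{c'} - \mu_G \rangle}
     {\|\mu_c - \mu_G\|\,\|\mu_{c'} - \mu_G\|}
  \;=\; -\frac{1}{C-1}, \qquad c \neq c'.
```

The last-layer classifier weights also align with these centered class means, so classifier and features share the same simplex-ETF geometry.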
no code implementations • 13 Jun 2023 • Ganesh Ramachandra Kini, Vala Vakilian, Tina Behnia, Jaidev Gill, Christos Thrampoulidis
Supervised contrastive loss (SCL) is a competitive and often superior alternative to the cross-entropy loss for classification.
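As a reference point for what "SCL" denotes here, a minimal NumPy sketch of the supervised contrastive loss (in the style of Khosla et al.; the function name, batch layout, and temperature default are illustrative assumptions, not the authors' code):

```python
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    z      : (n, d) array of embeddings (normalized to unit norm below)
    labels : (n,) integer class labels as a NumPy array
    tau    : temperature scaling the cosine similarities
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    sim = z @ z.T / tau                               # pairwise similarities
    n = len(labels)
    total, counted = 0.0, 0
    for i in range(n):
        others = np.arange(n) != i                    # exclude the anchor itself
        positives = others & (labels == labels[i])    # same-class, non-anchor
        if not positives.any():
            continue                                  # no positives: skip anchor
        # log of the denominator: sum over all non-anchor samples
        log_denom = np.log(np.exp(sim[i][others]).sum())
        # average -log p over the anchor's positives
        total += np.mean(log_denom - sim[i][positives])
        counted += 1
    return total / counted
```

Embeddings that cluster by class drive this loss toward zero, which is why SCL is a natural lens for the geometry questions studied in these papers.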
1 code implementation • 14 Mar 2023 • Tina Behnia, Ganesh Ramachandra Kini, Vala Vakilian, Christos Thrampoulidis
Aiming to extend this theory to non-linear models, we investigate the implicit geometry of classifiers and embeddings that are learned by different CE parameterizations.
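One concrete example of a "CE parameterization" in this literature is logit adjustment (Menon et al.), where log class priors are added to the logits before the softmax; the sketch below is our illustration of that idea, not the parameterization studied in this particular paper:

```python
import numpy as np

def logit_adjusted_ce(logits, labels, class_priors, tau=1.0):
    """Cross-entropy with additively adjusted logits.

    logits       : (n, C) array of raw model outputs
    labels       : (n,) integer labels as a NumPy array
    class_priors : (C,) empirical class frequencies, summing to 1
    tau          : strength of the adjustment (tau=0 recovers plain CE)
    """
    adj = logits + tau * np.log(class_priors)   # shift logits by log-priors
    z = adj - adj.max(axis=1, keepdims=True)    # stabilize the softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])
```

With uniform priors the adjustment is a constant shift that cancels in the softmax, so the loss reduces to standard cross-entropy; skewed priors change the effective decision margins, and hence the implicit geometry of the learned classifier.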
no code implementations • 10 Aug 2022 • Christos Thrampoulidis, Ganesh R. Kini, Vala Vakilian, Tina Behnia
However, we caution that convergence worsens as the imbalance grows.
no code implementations • 25 Jun 2022 • Tina Behnia, Ke Wang, Christos Thrampoulidis
Overparameterized models fail to generalize well under data imbalance, even when trained with traditional imbalance-mitigation techniques.
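A typical "traditional technique" referred to here is inverse-frequency reweighting of the cross-entropy loss; the sketch below shows that heuristic under our own naming, as a generic illustration rather than anything evaluated in the paper:

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_counts):
    """Cross-entropy with inverse-frequency class weights.

    logits       : (n, C) array of raw model outputs
    labels       : (n,) integer labels as a NumPy array
    class_counts : (C,) number of training samples per class
    """
    # weight_c = N / (C * n_c): minority classes get weights above 1
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    z = logits - logits.max(axis=1, keepdims=True)   # stabilize the softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    per_sample = -log_probs[np.arange(len(labels)), labels]
    return np.mean(weights[labels] * per_sample)
```

With balanced counts every weight is 1 and this is plain cross-entropy; under imbalance, minority-class samples are upweighted, which is exactly the kind of mitigation the abstract reports as insufficient for overparameterized models.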