no code implementations • 27 Oct 2023 • Monica Welfert, Gowtham R. Kurri, Kyle Otstot, Lalitha Sankar
Generalizing this dual-objective formulation using CPE losses, we define a suitable notion of estimation error and derive upper bounds on it.
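As context for the snippet above, a class probability estimation (CPE) loss scores a predicted probability of the positive class against a binary label; log loss is the canonical instance. The sketch below is illustrative only — the specific parametric family the paper studies is not reproduced here.

```python
import math

def log_loss(y: int, p: float) -> float:
    """Canonical CPE loss: penalize the predicted probability p of the
    positive class against the binary label y (log loss / cross-entropy)."""
    return -math.log(p) if y == 1 else -math.log(1.0 - p)

# A confident correct prediction incurs low loss; a confident wrong one, high loss.
low = log_loss(1, 0.99)   # ~0.01
high = log_loss(1, 0.01)  # ~4.6
```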
no code implementations • 28 Feb 2023 • Monica Welfert, Kyle Otstot, Gowtham R. Kurri, Lalitha Sankar
In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D).
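"Different value functions for G and D" can be made concrete with the best-known instance of this idea, the non-saturating GAN: D ascends the usual value log D(x) + log(1 - D(G(z))), while G minimizes -log D(G(z)) instead of log(1 - D(G(z))). This is a minimal sketch of that classic example, not the specific objective family the paper introduces.

```python
import math

def discriminator_value(d_real: float, d_fake: float) -> float:
    """Value the discriminator maximizes: log D(x) + log(1 - D(G(z)))."""
    return math.log(d_real) + math.log(1.0 - d_fake)

def generator_loss_saturating(d_fake: float) -> float:
    """Original minimax generator loss: log(1 - D(G(z))).
    Its gradient vanishes when D confidently rejects fakes (d_fake near 0)."""
    return math.log(1.0 - d_fake)

def generator_loss_non_saturating(d_fake: float) -> float:
    """Alternative generator objective: -log D(G(z)).
    Using a different objective for G than for D gives stronger gradients
    early in training, one motivation for dual-objective formulations."""
    return -math.log(d_fake)
```

Note that the two generator losses agree on which direction is better (both fall as d_fake rises) but differ in where their gradients are large, which is exactly the kind of trade-off a dual-objective analysis studies.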
no code implementations • 5 Jun 2022 • Kyle Otstot, Andrew Yang, John Kevin Cava, Lalitha Sankar
As a step towards addressing both problems simultaneously, we introduce AugLoss, a simple but effective methodology that unifies data augmentation with robust loss functions to achieve robustness against both train-time label noise and test-time feature distribution shifts.