no code implementations • 9 May 2024 • Monica Welfert, Nathan Stromberg, Lalitha Sankar
Ensuring fair predictions across many distinct subpopulations in the training data can be prohibitive for large models.
no code implementations • 16 Feb 2024 • Nathan Stromberg, Rohan Ayyagari, Monica Welfert, Sanmi Koyejo, Richard Nock, Lalitha Sankar
Existing methods for last layer retraining that aim to optimize worst-group accuracy (WGA) rely heavily on well-annotated groups in the training data.
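To make the setting concrete, the following is a minimal sketch of what group-annotated last-layer retraining typically looks like: a linear head is refit on frozen embeddings with samples reweighted by their (label, group) cell so small groups are not dominated. This is an illustrative baseline, not the paper's method, and all tensor names are assumptions.

```python
# Minimal sketch (illustrative, not the paper's method): last-layer retraining
# with group annotations, reweighting each (label, group) cell inversely to its size.
import torch
import torch.nn as nn

def retrain_last_layer(features, labels, group_ids, num_classes, epochs=100, lr=1e-2):
    """features: frozen-backbone embeddings [N, d]; labels, group_ids: [N] integer tensors."""
    head = nn.Linear(features.shape[1], num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=lr)

    # Inverse-frequency weight per (label, group) cell -> group-balanced risk.
    num_groups = int(group_ids.max()) + 1
    cells = labels * num_groups + group_ids
    counts = torch.bincount(cells).float()
    weights = 1.0 / counts[cells]
    weights = weights / weights.mean()

    loss_fn = nn.CrossEntropyLoss(reduction="none")
    for _ in range(epochs):
        opt.zero_grad()
        per_sample = loss_fn(head(features), labels)
        (weights * per_sample).mean().backward()
        opt.step()
    return head
```

Note that this baseline only works when `group_ids` are reliable, which is exactly the annotation dependence the paper targets.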
no code implementations • 27 Oct 2023 • Monica Welfert, Gowtham R. Kurri, Kyle Otstot, Lalitha Sankar
Generalizing this dual-objective formulation using CPE losses, we define and obtain upper bounds on an appropriately defined estimation error.
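For context, estimation error in GAN analyses of this kind is usually measured through the divergence induced by the value function. A generic form (an assumption about the setup, not necessarily the paper's exact definition) is sketched below, where $\hat{G}_n$ is the generator learned from $n$ samples and $\mathcal{G}$ is the generator class.

```latex
% Generic GAN estimation error (schematic; notation assumed):
% induced distance d_V(P_r, P_G) = \sup_{D \in \mathcal{D}} V(G, D)
\[
  \underbrace{d_V\big(P_r, P_{\hat{G}_n}\big)
    \;-\; \inf_{G \in \mathcal{G}} d_V\big(P_r, P_G\big)}_{\text{estimation error}} .
\]
```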
no code implementations • 28 Feb 2023 • Monica Welfert, Kyle Otstot, Gowtham R. Kurri, Lalitha Sankar
In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D).
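As a minimal sketch of the dual-objective idea, the training step below lets D maximize one value function while G minimizes a different one. The particular choices here (standard cross-entropy for D, non-saturating cross-entropy for G) are illustrative only and are not the CPE-loss objectives introduced in the paper.

```python
# Minimal sketch of a dual-objective GAN update: D and G optimize *different*
# value functions. The specific losses below are illustrative, not the paper's.
import torch
import torch.nn.functional as F

def dual_objective_step(G, D, opt_G, opt_D, real, z):
    # Discriminator step: maximize its own value function V_D.
    d_real = D(real)
    d_fake = D(G(z).detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: minimize a *different* value function V_G
    # (here, the non-saturating objective rather than -V_D).
    d_gen = D(G(z))
    g_loss = F.binary_cross_entropy_with_logits(d_gen, torch.ones_like(d_gen))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```

Decoupling the two objectives is what gives the extra freedom to trade off gradient quality for G against the discriminative power of D.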
no code implementations • 12 May 2022 • Gowtham R. Kurri, Monica Welfert, Tyler Sypherd, Lalitha Sankar
We prove a two-way correspondence between the min-max optimization of general CPE loss function GANs and the minimization of associated $f$-divergences.
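A schematic statement of this correspondence, with notation assumed and additive constants omitted, is sketched below: a CPE loss $\ell$ defines the GAN value function, and the inner maximization over discriminators recovers an $f$-divergence determined by $\ell$.

```latex
% Schematic correspondence for CPE-loss GANs (constants omitted):
\[
  V_\ell(G, D) \;=\; \mathbb{E}_{X \sim P_r}\!\big[-\ell\big(1, D(X)\big)\big]
               \;+\; \mathbb{E}_{X \sim P_G}\!\big[-\ell\big(0, D(X)\big)\big],
\]
\[
  \sup_{D} V_\ell(G, D) \;=\; D_{f_\ell}\!\big(P_r \,\|\, P_G\big) + \text{const},
  \qquad\Longrightarrow\qquad
  \min_G \sup_D V_\ell(G, D) \;\equiv\; \min_G D_{f_\ell}\!\big(P_r \,\|\, P_G\big).
\]
```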
no code implementations • 27 Sep 2019 • Peter Kairouz, Jiachun Liao, Chong Huang, Maunil Vyas, Monica Welfert, Lalitha Sankar
We present a data-driven framework for learning fair universal representations (FUR) that guarantee statistical fairness for any downstream learning task, even one that is not known a priori.
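The sketch below shows the generic adversarial representation-learning pattern such frameworks build on: an encoder produces a representation Z while an adversary tries to recover the sensitive attribute S from it, and the encoder is additionally penalized for distorting the input X. The module names, the cross-entropy adversary, and the MSE distortion term are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal sketch of adversarial representation learning for fairness
# (illustrative; module names and the distortion term are assumptions).
import torch
import torch.nn.functional as F

def adversarial_step(encoder, adversary, opt_enc, opt_adv, x, s, distortion_weight=1.0):
    # Adversary step: improve prediction of the sensitive attribute S from Z.
    z = encoder(x).detach()
    adv_loss = F.cross_entropy(adversary(z), s)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # Encoder step: make S hard to infer from Z while limiting distortion of X
    # (assumes Z and X share the same shape for the MSE term).
    z = encoder(x)
    enc_loss = -F.cross_entropy(adversary(z), s) + distortion_weight * F.mse_loss(z, x)
    opt_enc.zero_grad(); enc_loss.backward(); opt_enc.step()
    return adv_loss.item(), enc_loss.item()
```

Because the representation, rather than a task-specific predictor, is what gets sanitized, any downstream model trained on Z inherits the fairness guarantee.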