no code implementations • 13 Jun 2024 • Nathan Stromberg, Rohan Ayyagari, Sanmi Koyejo, Richard Nock, Lalitha Sankar
Last-layer retraining methods have emerged as an efficient framework for correcting existing base models.
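As a generic sketch of the recipe (not these authors' specific method): freeze the pretrained feature extractor and refit only the final linear head on held-out data. The scikit-learn head below is an illustrative choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def last_layer_retrain(embed_fn, X_heldout, y_heldout):
    """Refit only the final linear layer of a frozen base model.

    embed_fn: frozen feature extractor mapping an input to an embedding.
    A generic sketch of last-layer retraining, not the paper's method.
    """
    Z = np.stack([embed_fn(x) for x in X_heldout])  # frozen features
    head = LogisticRegression(max_iter=1000)
    head.fit(Z, y_heldout)                          # retrain the head only
    return head
```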
no code implementations • 9 May 2024 • Monica Welfert, Nathan Stromberg, Lalitha Sankar
Ensuring fair predictions across many distinct subpopulations in the training data can be prohibitive for large models.
no code implementations • 5 May 2024 • Joel Mathias, Rajasekhar Anguluri, Oliver Kosut, Lalitha Sankar
Distributed energy resources (DERs) such as grid-responsive loads and batteries can be harnessed to provide ramping and regulation services across the grid.
no code implementations • 19 Feb 2024 • Obai Bahwal, Oliver Kosut, Lalitha Sankar
Thorough experiments on the synthetic South Carolina 500-bus system show that a simpler model such as logistic regression is more susceptible to adversarial attacks than gradient boosting.
no code implementations • 16 Feb 2024 • Nathan Stromberg, Rohan Ayyagari, Monica Welfert, Sanmi Koyejo, Richard Nock, Lalitha Sankar
Existing methods for last layer retraining that aim to optimize worst-group accuracy (WGA) rely heavily on well-annotated groups in the training data.
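Worst-group accuracy itself is straightforward to compute once group labels are available; a minimal sketch with illustrative variable names:

```python
import numpy as np

def worst_group_accuracy(y_true, y_pred, groups):
    """Minimum per-group accuracy over the distinct group labels."""
    accs = []
    for g in np.unique(groups):
        mask = groups == g
        accs.append(np.mean(y_true[mask] == y_pred[mask]))
    return min(accs)
```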
no code implementations • 29 Dec 2023 • Joshua Inman, Tanmay Khandait, Giulia Pedrielli, Lalitha Sankar
The performance of modern machine learning algorithms depends upon the selection of a set of hyperparameters.
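As a baseline for the problem such methods address, plain random search over a hyperparameter space looks as follows; this sketch is generic and is not the authors' algorithm.

```python
import random

def random_search(train_eval, space, n_trials=20, seed=0):
    """Baseline random search over a hyperparameter space.

    train_eval: maps a config dict to a validation score (higher is better).
    space: dict of hyperparameter name -> list of candidate values.
    """
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        score = train_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```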
no code implementations • 27 Oct 2023 • Monica Welfert, Gowtham R. Kurri, Kyle Otstot, Lalitha Sankar
Generalizing this dual-objective formulation using CPE losses, we define and obtain upper bounds on an appropriately defined estimation error.
no code implementations • 18 Sep 2023 • Nima Taghipourbazargani, Lalitha Sankar, Oliver Kosut
Using this package, we generate and evaluate eventful PMU data for the South Carolina synthetic network.
no code implementations • 28 Feb 2023 • Monica Welfert, Kyle Otstot, Gowtham R. Kurri, Lalitha Sankar
In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D).
no code implementations • 17 Feb 2023 • Tyler Sypherd, Nathan Stromberg, Richard Nock, Visar Berisha, Lalitha Sankar
There is a growing need for models that are interpretable and have reduced energy and computational cost (e.g., in health care analytics and federated learning).
no code implementations • 10 Nov 2022 • Abrar Zahin, Rajasekhar Anguluri, Lalitha Sankar, Oliver Kosut, Gautam Dasarathy
We first characterize the equivalence class up to which general graphs can be recovered in the presence of noise.
no code implementations • 20 Aug 2022 • Wael Alghamdi, Shahab Asoodeh, Flavio P. Calmon, Juan Felipe Gomez, Oliver Kosut, Lalitha Sankar, Fei Wei
SPA approximates privacy guarantees for the composition of DP mechanisms accurately and quickly.
no code implementations • 9 Aug 2022 • Rajasekhar Anguluri, Lalitha Sankar, Oliver Kosut
This ill-conditioning arises because converter-interfaced generators in power systems contribute zero or negligible inertia.
no code implementations • 25 Jun 2022 • Wael Alghamdi, Shahab Asoodeh, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar, Fei Wei
Since the optimization problem is infinite-dimensional, it cannot be solved directly; nevertheless, we quantize the problem to derive near-optimal additive mechanisms that we call "cactus mechanisms" due to their shape.
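To picture the quantization step: project a continuous noise density onto a finite grid of atoms and renormalize, turning the infinite-dimensional design problem into a finite one. The grid, density, and renormalization below are a toy illustration, not the paper's construction.

```python
import numpy as np

def quantize_density(pdf, lo, hi, n_bins):
    """Project a continuous noise density onto a finite grid of atoms.

    Returns grid points and a renormalized probability mass function;
    a toy stand-in for the paper's quantization, not its construction.
    """
    grid = np.linspace(lo, hi, n_bins)
    mass = pdf(grid)
    return grid, mass / mass.sum()

# Example: discretize a Laplace-shaped density on [-10, 10].
grid, pmf = quantize_density(lambda x: 0.5 * np.exp(-np.abs(x)), -10, 10, 201)
```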
no code implementations • 5 Jun 2022 • Kyle Otstot, Andrew Yang, John Kevin Cava, Lalitha Sankar
As a step towards addressing both problems simultaneously, we introduce AugLoss, a simple but effective methodology that achieves robustness against both train-time noisy labeling and test-time feature distribution shifts by unifying data augmentation and robust loss functions.
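The unifying idea can be sketched as composing an augmentation with a bounded (noise-robust) loss inside one objective; the Gaussian jitter and sigmoid-style loss below are illustrative stand-ins, not the paper's exact choices.

```python
import numpy as np

def augmented_robust_loss(model, X, y, rng, sigma=0.1):
    """Sketch: augment inputs, then score them with a robust loss.

    Gaussian feature jitter and the bounded sigmoid-style loss are
    placeholders for the paper's augmentation/loss pairing.
    """
    X_aug = X + rng.normal(0.0, sigma, X.shape)    # train-time augmentation
    margins = y * model(X_aug)                     # labels in {-1, +1}
    return np.mean(1.0 / (1.0 + np.exp(margins)))  # bounded, noise-robust loss
```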
no code implementations • 12 May 2022 • Gowtham R. Kurri, Monica Welfert, Tyler Sypherd, Lalitha Sankar
We prove a two-way correspondence between the min-max optimization of general CPE loss function GANs and the minimization of associated $f$-divergences.
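The prototypical instance of such a correspondence, shown below for context, is the original GAN result: with the discriminator optimized out, the vanilla min-max value function reduces to a Jensen-Shannon divergence (Goodfellow et al., 2014).

```latex
% Classical special case (Goodfellow et al., 2014), for context:
% with the discriminator optimized out, the vanilla GAN objective is
\max_D \; \mathbb{E}_{x \sim P}[\log D(x)]
       + \mathbb{E}_{x \sim Q_G}[\log(1 - D(x))]
  = -\log 4 + 2\,\mathrm{JSD}(P \,\|\, Q_G),
% so the generator's outer minimization is exactly JSD minimization.
```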
no code implementations • 14 Feb 2022 • Nima T. Bazargani, Gautam Dasarathy, Lalitha Sankar, Oliver Kosut
Using the obtained subset of features, we investigate the performance of two well-known classification models, namely, logistic regression (LR) and support vector machines (SVM), to identify generation loss and line trip events in two datasets.
no code implementations • 8 Jul 2021 • Andrea Pinceti, Lalitha Sankar, Oliver Kosut
A framework for the generation of synthetic time-series transmission-level load data is presented.
no code implementations • 8 Jul 2021 • Andrea Pinceti, Lalitha Sankar, Oliver Kosut
The availability of large datasets is crucial for the development of new power system applications and tools; unfortunately, very few are publicly and freely available.
no code implementations • 18 Jun 2021 • Tyler Sypherd, Richard Nock, Lalitha Sankar
Hence, optimizing a proper loss function on twisted data could perilously lead the learning algorithm towards the twisted posterior, rather than to the desired clean posterior.
no code implementations • 9 Jun 2021 • Gowtham R. Kurri, Tyler Sypherd, Lalitha Sankar
We introduce a tunable GAN, called $\alpha$-GAN, parameterized by $\alpha \in (0,\infty]$, which interpolates between various $f$-GANs and Integral Probability Metric based GANs (under a constrained discriminator set).
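A heavily hedged sketch of what such a tunable value function can look like: each log term of the vanilla GAN objective is replaced with the corresponding negative $\alpha$-loss term. The normalization below follows my reading of the construction and may differ from the paper's exact constants.

```python
import numpy as np

def alpha_gan_value(d_real, d_fake, alpha):
    """Sketch of a tunable GAN value function built from alpha-loss.

    Replaces each log term of the vanilla GAN objective with the
    corresponding negative alpha-loss term; constants follow my
    reading and may differ from the paper.
    d_real, d_fake: discriminator outputs in (0, 1).
    """
    if alpha == 1.0:  # recovers the vanilla (log) GAN objective
        return np.mean(np.log(d_real)) + np.mean(np.log1p(-d_fake))
    c = alpha / (alpha - 1.0)
    t = (alpha - 1.0) / alpha
    return c * (np.mean(d_real ** t) + np.mean((1.0 - d_fake) ** t) - 2.0)
```

As a sanity check, letting alpha tend to 1 recovers the log terms, since c * (p ** t - 1) tends to log(p) as t goes to 0.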
1 code implementation • 28 Apr 2021 • Zhigang Chu, Andrea Pinceti, Ramin Kaviani, Roozbeh Khodadadeh, Xingpeng Li, Jiazi Zhang, Karthik Saikumar, Mostafa Sahraei-Ardakani, Christopher Mosier, Robin Podmore, Kory Hedman, Oliver Kosut, Lalitha Sankar
In this paper, we investigate the feasibility and physical consequences of cyber attacks against energy management systems (EMS).
no code implementations • 14 Aug 2020 • Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar
In the first part, we develop machinery for optimally relating approximate DP to RDP based on the joint range of two $f$-divergences that underlie the approximate DP and RDP.
no code implementations • 22 Jun 2020 • Tyler Sypherd, Mario Diaz, Lalitha Sankar, Gautam Dasarathy
We analyze the optimization landscape of a recently introduced tunable class of loss functions called $\alpha$-loss, $\alpha \in (0,\infty]$, in the logistic model.
no code implementations • 16 Jan 2020 • Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar
We derive the optimal differential privacy (DP) parameters of a mechanism that satisfies a given level of Rényi differential privacy (RDP).
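For context, the standard (generally suboptimal) RDP-to-DP conversion that an optimal result improves on is the bound below (Mironov, 2017); a minimal sketch:

```python
import numpy as np

def rdp_to_dp(rdp_eps, orders, delta):
    """Standard RDP -> (epsilon, delta)-DP conversion (Mironov, 2017).

    eps(delta) = min over orders a > 1 of rdp_eps(a) + log(1/delta)/(a - 1).
    This is the baseline bound, not the optimal conversion.
    """
    orders = np.asarray(orders, dtype=float)   # all orders must exceed 1
    rdp_eps = np.asarray(rdp_eps, dtype=float)
    return float(np.min(rdp_eps + np.log(1.0 / delta) / (orders - 1.0)))
```

Given a mechanism's RDP epsilons at several orders, the function picks the order that minimizes the converted epsilon at the target delta.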
no code implementations • 8 Nov 2019 • Mario Diaz, Peter Kairouz, Jiachun Liao, Lalitha Sankar
Privacy concerns have led to the development of privacy-preserving approaches for learning models from sensitive data.
no code implementations • 27 Sep 2019 • Peter Kairouz, Jiachun Liao, Chong Huang, Maunil Vyas, Monica Welfert, Lalitha Sankar
We present a data-driven framework for learning fair universal representations (FUR) that guarantee statistical fairness for any learning task that may not be known a priori.
1 code implementation • 5 Jun 2019 • Tyler Sypherd, Mario Diaz, John Kevin Cava, Gautam Dasarathy, Peter Kairouz, Lalitha Sankar
We introduce a tunable loss function called $\alpha$-loss, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$), for the machine learning setting of classification.
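The interpolation can be written out directly as a function of the probability assigned to the true label; the form below follows the stated special cases and may differ from the paper in normalization.

```python
import numpy as np

def alpha_loss(p_true, alpha):
    """alpha-loss on the probability assigned to the true label.

    Interpolates exponential loss (alpha = 1/2), log-loss (alpha = 1),
    and the 0-1 loss limit (alpha -> infinity); normalization may
    differ from the paper's.
    """
    p = np.asarray(p_true, dtype=float)
    if alpha == 1.0:
        return -np.log(p)        # log-loss
    if np.isinf(alpha):
        return 1.0 - p           # 0-1 loss limit
    c = alpha / (alpha - 1.0)
    return c * (1.0 - p ** ((alpha - 1.0) / alpha))
```

In the logistic model with p = sigmoid(margin), alpha = 1/2 yields 1/p - 1 = exp(-margin), recovering the exponential loss in margin form.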
no code implementations • ICLR 2019 • Chong Huang, Xiao Chen, Peter Kairouz, Lalitha Sankar, Ram Rajagopal
We present Generative Adversarial Privacy and Fairness (GAPF), a data-driven framework for learning private and fair representations of the data.
no code implementations • 12 Feb 2019 • Tyler Sypherd, Mario Diaz, Lalitha Sankar, Peter Kairouz
We present $\alpha$-loss, $\alpha \in [1,\infty]$, a tunable loss function for binary classification that bridges log-loss ($\alpha=1$) and $0$-$1$ loss ($\alpha = \infty$).
no code implementations • ICLR 2019 • Chong Huang, Peter Kairouz, Xiao Chen, Lalitha Sankar, Ram Rajagopal
We present a data-driven framework called generative adversarial privacy (GAP).
no code implementations • 26 Oct 2017 • Chong Huang, Peter Kairouz, Xiao Chen, Lalitha Sankar, Ram Rajagopal
On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility.