Search Results for author: Lalitha Sankar

Found 32 papers, 2 papers with code

Theoretical Guarantees of Data Augmented Last Layer Retraining Methods

no code implementations • 9 May 2024 • Monica Welfert, Nathan Stromberg, Lalitha Sankar

Ensuring fair predictions across many distinct subpopulations in the training data can be prohibitive for large models.

Data Augmentation

Model Predictive Control for Joint Ramping and Regulation-Type Service from Distributed Energy Resource Aggregations

no code implementations • 5 May 2024 • Joel Mathias, Rajasekhar Anguluri, Oliver Kosut, Lalitha Sankar

Distributed energy resources (DERs) such as grid-responsive loads and batteries can be harnessed to provide ramping and regulation services across the grid.

Model Predictive Control

An Adversarial Approach to Evaluating the Robustness of Event Identification Models

no code implementations • 19 Feb 2024 • Obai Bahwal, Oliver Kosut, Lalitha Sankar

Thorough experiments on the synthetic South Carolina 500-bus system highlight that a relatively simple model such as logistic regression is more susceptible to adversarial attacks than gradient boosting.

Adversarial Attack, Classification +2
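
Why logistic regression is the softer target is easy to see in a sketch: its input-gradient is proportional to a fixed weight vector, so a fast-gradient-sign (FGSM-style) perturbation exists in closed form. The snippet below is illustrative only, with synthetic stand-in features rather than the paper's PMU event data or attack.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                 # stand-in for event-identification features
y = (X @ rng.normal(size=10) > 0).astype(int)  # synthetic labels

clf = LogisticRegression().fit(X, y)

# For logistic loss, sign(grad_x loss) = -sign(w) on class 1 and +sign(w)
# on class 0, so the FGSM perturbation needs no gradient computation.
w = clf.coef_.ravel()
eps = 0.5
X_adv = X - eps * np.sign(w) * np.where(y == 1, 1.0, -1.0)[:, None]

print("clean accuracy:      ", clf.score(X, y))
print("adversarial accuracy:", clf.score(X_adv, y))
```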

Robustness to Subpopulation Shift with Domain Label Noise via Regularized Annotation of Domains

no code implementations • 16 Feb 2024 • Nathan Stromberg, Rohan Ayyagari, Monica Welfert, Sanmi Koyejo, Richard Nock, Lalitha Sankar

Existing methods for last layer retraining that aim to optimize worst-group accuracy (WGA) rely heavily on well-annotated groups in the training data.
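
"Last layer retraining" here means freezing a pretrained feature extractor and refitting only the final linear head, typically on group-annotated held-out data; the paper's concern is what happens when those group (domain) labels are noisy. A minimal sketch of the group-balanced baseline, with a hypothetical frozen `backbone` supplying the features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def last_layer_retrain(feats, labels, groups, seed=0):
    """Refit only the linear head on a group-balanced subsample (sketch).
    Group labels may be noisy, which is the failure mode studied here."""
    rng = np.random.default_rng(seed)
    uniq = np.unique(groups)
    n_min = min(int((groups == g).sum()) for g in uniq)  # smallest group size
    idx = np.concatenate([
        rng.choice(np.flatnonzero(groups == g), size=n_min, replace=False)
        for g in uniq
    ])
    return LogisticRegression(max_iter=1000).fit(feats[idx], labels[idx])

# feats = backbone(X)  # hypothetical frozen feature extractor
```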

Parameter Optimization with Conscious Allocation (POCA)

no code implementations • 29 Dec 2023 • Joshua Inman, Tanmay Khandait, Giulia Pedrielli, Lalitha Sankar

The performance of modern machine learning algorithms depends upon the selection of a set of hyperparameters.

Addressing GAN Training Instabilities via Tunable Classification Losses

no code implementations • 27 Oct 2023 • Monica Welfert, Gowtham R. Kurri, Kyle Otstot, Lalitha Sankar

Generalizing this dual-objective formulation using CPE losses, we define and obtain upper bounds on an appropriately defined estimation error.

Classification

A Semi-Supervised Approach for Power System Event Identification

no code implementations • 18 Sep 2023 • Nima Taghipourbazargani, Lalitha Sankar, Oliver Kosut

Using this package, we generate and evaluate eventful PMU data for the South Carolina synthetic network.

$(\alpha_D,\alpha_G)$-GANs: Addressing GAN Training Instabilities via Dual Objectives

no code implementations • 28 Feb 2023 • Monica Welfert, Kyle Otstot, Gowtham R. Kurri, Lalitha Sankar

In an effort to address the training instabilities of GANs, we introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D).
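
In code, "dual objective" means the two players no longer share one value function: D ascends its own objective with parameter α_D while G descends a separate one with α_G. A sketch of the tunable value function as it is commonly stated for this GAN family (the normalization here is an assumption and may differ from the paper's):

```python
import numpy as np

def v_alpha(d_real, d_fake, alpha):
    """alpha-parameterized GAN value function on discriminator outputs in
    (0, 1); alpha = 1 recovers the vanilla GAN objective (sketch)."""
    if alpha == 1.0:
        return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
    a = alpha / (alpha - 1.0)
    return a * (np.mean(d_real ** (1.0 / a))
                + np.mean((1.0 - d_fake) ** (1.0 / a)) - 2.0)

# Dual-objective training step (schematic):
#   D-step: ascend  v_alpha(D(x), D(G(z)), alpha_D) in D's parameters
#   G-step: descend v_alpha(D(x), D(G(z)), alpha_G) in G's parameters
```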

Smoothly Giving up: Robustness for Simple Models

no code implementations • 17 Feb 2023 • Tyler Sypherd, Nathan Stromberg, Richard Nock, Visar Berisha, Lalitha Sankar

There is a growing need for models that are interpretable and have reduced energy and computational cost (e.g., in health care analytics and federated learning).

Federated Learning, regression

Robust Model Selection of Gaussian Graphical Models

no code implementations • 10 Nov 2022 • Abrar Zahin, Rajasekhar Anguluri, Lalitha Sankar, Oliver Kosut, Gautam Dasarathy

We first characterize the equivalence class up to which general graphs can be recovered in the presence of noise.

Model Selection

Parameter Estimation in Ill-conditioned Low-inertia Power Systems

no code implementations • 9 Aug 2022 • Rajasekhar Anguluri, Lalitha Sankar, Oliver Kosut

This ill-conditioning arises because converter-interfaced generators contribute zero or very small inertia to the power system.

Connectivity Estimation

Cactus Mechanisms: Optimal Differential Privacy Mechanisms in the Large-Composition Regime

no code implementations • 25 Jun 2022 • Wael Alghamdi, Shahab Asoodeh, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar, Fei Wei

Since the optimization problem is infinite-dimensional, it cannot be solved directly; nevertheless, we quantize the problem to derive near-optimal additive mechanisms that we call "cactus mechanisms" due to their shape.

Quantization

AugLoss: A Robust Augmentation-based Fine Tuning Methodology

no code implementations • 5 Jun 2022 • Kyle Otstot, Andrew Yang, John Kevin Cava, Lalitha Sankar

As a step towards addressing both problems simultaneously, we introduce AugLoss, a simple but effective methodology that achieves robustness against both train-time noisy labeling and test-time feature distribution shifts by unifying data augmentation and robust loss functions.

Data Augmentation
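
The recipe the abstract describes composes two standard ingredients: augment inputs at fine-tuning time, and score them with a noise-robust loss rather than plain cross-entropy. A schematic of that composition; the jitter augmentation and the tilted loss below are generic stand-ins, not the exact AugLoss components:

```python
import numpy as np

def augment(x, rng, scale=0.1):
    """Generic stand-in augmentation (input jitter); illustrative only."""
    return x + scale * rng.normal(size=x.shape)

def robust_loss(p_true, alpha=3.0):
    """Tilted, alpha-loss-style alternative to cross-entropy: for alpha > 1
    its gradient grows like p**(-1/alpha) rather than 1/p as p -> 0,
    reducing sensitivity to mislabeled points."""
    a = alpha / (alpha - 1.0)
    return a * (1.0 - p_true ** (1.0 / a))

# Fine-tuning step (schematic):
#   p = model.predict_proba(augment(x, rng))[np.arange(n), y]
#   minimize robust_loss(p).mean()
```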

$\alpha$-GAN: Convergence and Estimation Guarantees

no code implementations • 12 May 2022 • Gowtham R. Kurri, Monica Welfert, Tyler Sypherd, Lalitha Sankar

We prove a two-way correspondence between the min-max optimization of general CPE loss function GANs and the minimization of associated $f$-divergences.
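
The α = 1 member of this correspondence is the classical GAN identity, which is easy to state exactly: plugging in the optimal discriminator turns the inner maximization into a Jensen-Shannon divergence, so the outer minimization over G drives P_G toward P_r. (For general α the induced divergence is an Arimoto-type f-divergence; the constants below are specific to α = 1.)

```latex
% Vanilla GAN (alpha = 1): with the optimal discriminator
% D^*(x) = p_r(x) / (p_r(x) + p_G(x)),
\sup_{D} V_1(G, D) \;=\; 2\,\mathrm{JSD}\!\left(P_r \,\|\, P_G\right) - \log 4 ,
% minimized over G exactly when P_G = P_r.
```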

A Machine Learning Framework for Event Identification via Modal Analysis of PMU Data

no code implementations • 14 Feb 2022 • Nima T. Bazargani, Gautam Dasarathy, Lalitha Sankar, Oliver Kosut

Using the obtained subset of features, we investigate the performance of two well-known classification models, namely logistic regression (LR) and support vector machines (SVM), to identify generation loss and line trip events in two datasets.

feature selection
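
The classification stage is a standard two-model comparison, which a few lines of scikit-learn reproduce in spirit; the features and labels below are synthetic stand-ins, since the modal-analysis features and PMU datasets are the paper's own:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))      # stand-in for selected modal features
y = rng.integers(0, 2, size=400)   # 0 = generation loss, 1 = line trip (illustrative)

for name, model in [("LR", LogisticRegression()), ("SVM", SVC())]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```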

Generation of Synthetic Multi-Resolution Time Series Load Data

no code implementations • 8 Jul 2021 • Andrea Pinceti, Lalitha Sankar, Oliver Kosut

The availability of large datasets is crucial for the development of new power system applications and tools; unfortunately, very few are publicly and freely available.

Generative Adversarial Network, Time Series +1

Being Properly Improper

no code implementations • 18 Jun 2021 • Tyler Sypherd, Richard Nock, Lalitha Sankar

Hence, optimizing a proper loss function on twisted data could perilously lead the learning algorithm towards the twisted posterior, rather than to the desired clean posterior.

Realizing GANs via a Tunable Loss Function

no code implementations • 9 Jun 2021 • Gowtham R. Kurri, Tyler Sypherd, Lalitha Sankar

We introduce a tunable GAN, called $\alpha$-GAN, parameterized by $\alpha \in (0,\infty]$, which interpolates between various $f$-GANs and Integral Probability Metric (IPM) based GANs (under a constrained discriminator set).
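
The two endpoints of that interpolation can be read off the limits of the tunable objective; up to constants, and hedging on the exact normalization used in the paper:

```latex
\alpha \to 1:\quad
  V_1 = \mathbb{E}_{P_r}[\log D] + \mathbb{E}_{P_G}[\log(1 - D)]
  \quad\text{(vanilla GAN, an $f$-GAN)};
\qquad
\alpha \to \infty:\quad
  V_\infty = \mathbb{E}_{P_r}[D] - \mathbb{E}_{P_G}[D] - 1
  \quad\text{(IPM-type, for a constrained discriminator class)}.
```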

Three Variants of Differential Privacy: Lossless Conversion and Applications

no code implementations • 14 Aug 2020 • Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar

In the first part, we develop machinery for optimally relating approximate DP to RDP based on the joint range of the two $f$-divergences that underlie approximate DP and RDP.

On the alpha-loss Landscape in the Logistic Model

no code implementations • 22 Jun 2020 • Tyler Sypherd, Mario Diaz, Lalitha Sankar, Gautam Dasarathy

We analyze the optimization landscape of a recently introduced tunable class of loss functions called $\alpha$-loss, $\alpha \in (0,\infty]$, in the logistic model.

A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via $f$-Divergences

no code implementations • 16 Jan 2020 • Shahab Asoodeh, Jiachun Liao, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar

We derive the optimal differential privacy (DP) parameters of a mechanism that satisfies a given level of Rényi differential privacy (RDP).
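
For orientation, the baseline that such results tighten is the classical RDP-to-approximate-DP conversion: a mechanism satisfying $(\lambda, \epsilon)$-RDP is $(\epsilon + \log(1/\delta)/(\lambda - 1), \delta)$-DP for every $\delta \in (0,1)$. A one-liner for that baseline (the standard bound, not the optimal parameters derived in the paper):

```python
import math

def rdp_to_dp(eps_rdp, lam, delta):
    """Classical (lambda, eps)-RDP -> (eps', delta)-DP conversion; the
    paper derives strictly tighter, optimal parameters."""
    return eps_rdp + math.log(1.0 / delta) / (lam - 1.0)

print(rdp_to_dp(eps_rdp=0.5, lam=10, delta=1e-5))  # ~1.779
```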

Theoretical Guarantees for Model Auditing with Finite Adversaries

no code implementations • 8 Nov 2019 • Mario Diaz, Peter Kairouz, Jiachun Liao, Lalitha Sankar

Privacy concerns have led to the development of privacy-preserving approaches for learning models from sensitive data.

Privacy Preserving

Generating Fair Universal Representations using Adversarial Models

no code implementations • 27 Sep 2019 • Peter Kairouz, Jiachun Liao, Chong Huang, Maunil Vyas, Monica Welfert, Lalitha Sankar

We present a data-driven framework for learning fair universal representations (FUR) that guarantee statistical fairness for any learning task that may not be known a priori.

Fairness, Human Activity Recognition
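
The training template behind such frameworks is an adversarial game: an encoder keeps the representation useful for downstream tasks while an adversary trying to recover the sensitive attribute is maximally confused. A schematic PyTorch step; the architectures, dimensions, and trade-off weight are illustrative assumptions, not the paper's exact FUR construction:

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))  # encoder
task = nn.Linear(4, 2)   # downstream task head
adv = nn.Linear(4, 2)    # adversary predicting the sensitive attribute
opt_enc = torch.optim.Adam([*enc.parameters(), *task.parameters()], lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(32, 16)          # synthetic batch
y = torch.randint(0, 2, (32,))   # task label
s = torch.randint(0, 2, (32,))   # sensitive attribute

# Adversary step: learn to predict s from the (detached) representation.
opt_adv.zero_grad()
ce(adv(enc(x).detach()), s).backward()
opt_adv.step()

# Encoder step: stay accurate on the task while fooling the adversary.
opt_enc.zero_grad()
(ce(task(enc(x)), y) - 1.0 * ce(adv(enc(x)), s)).backward()
opt_enc.step()
```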

A Tunable Loss Function for Robust Classification: Calibration, Landscape, and Generalization

1 code implementation • 5 Jun 2019 • Tyler Sypherd, Mario Diaz, John Kevin Cava, Gautam Dasarathy, Peter Kairouz, Lalitha Sankar

We introduce a tunable loss function called $\alpha$-loss, parameterized by $\alpha \in (0,\infty]$, which interpolates between the exponential loss ($\alpha = 1/2$), the log-loss ($\alpha = 1$), and the 0-1 loss ($\alpha = \infty$), for the machine learning setting of classification.

Classification, General Classification +1
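
The stated interpolation is easy to check numerically. In terms of the model's predicted probability p of the true class (equivalent to the paper's margin form under p = σ(z)), a minimal sketch:

```python
import numpy as np

def alpha_loss(p, alpha):
    """alpha-loss of the predicted true-class probability p (sketch)."""
    p = np.asarray(p, dtype=float)
    if alpha == 1.0:        # log-loss limit
        return -np.log(p)
    if np.isinf(alpha):     # alpha -> infinity limit
        return 1.0 - p
    a = alpha / (alpha - 1.0)
    return a * (1.0 - p ** (1.0 / a))

p = np.array([0.2, 0.5, 0.9])
print(np.allclose(alpha_loss(p, 0.5), 1.0 / p - 1.0))   # exponential loss: e^{-z} = 1/p - 1
print(np.allclose(alpha_loss(p, 1.0), -np.log(p)))      # log-loss
print(np.allclose(alpha_loss(p, np.inf), 1.0 - p))      # 0-1 loss limit, 1 - p
```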

Generative Adversarial Models for Learning Private and Fair Representations

no code implementations • ICLR 2019 • Chong Huang, Xiao Chen, Peter Kairouz, Lalitha Sankar, Ram Rajagopal

We present Generative Adversarial Privacy and Fairness (GAPF), a data-driven framework for learning private and fair representations of the data.

Fairness

A Tunable Loss Function for Binary Classification

no code implementations • 12 Feb 2019 • Tyler Sypherd, Mario Diaz, Lalitha Sankar, Peter Kairouz

We present $\alpha$-loss, $\alpha \in [1,\infty]$, a tunable loss function for binary classification that bridges log-loss ($\alpha=1$) and $0$-$1$ loss ($\alpha = \infty$).

Binary Classification, Classification +2

Generative Adversarial Privacy

no code implementations • ICLR 2019 • Chong Huang, Peter Kairouz, Xiao Chen, Lalitha Sankar, Ram Rajagopal

We present a data-driven framework called generative adversarial privacy (GAP).

Context-Aware Generative Adversarial Privacy

no code implementations • 26 Oct 2017 • Chong Huang, Peter Kairouz, Xiao Chen, Lalitha Sankar, Ram Rajagopal

On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility.
