no code implementations • 17 Feb 2024 • Andrew Lowy, Jonathan Ullman, Stephen J. Wright
We use this framework to obtain improved, and sometimes optimal, rates for several classes of non-convex loss functions.
no code implementations • 14 Feb 2024 • Andrew Lowy, Zhuohang Li, Jing Liu, Toshiaki Koike-Akino, Kieran Parsons, Ye Wang
In practical applications, such a worst-case guarantee may be overkill: practical attackers may lack exact knowledge of (nearly all of) the private data, and our data set might be easier to defend, in some sense, than the worst-case data set.
1 code implementation • 26 Jun 2023 • Andrew Lowy, Zeman Li, Tianjian Huang, Meisam Razaviyayn
We show that the optimal error rates can be attained (up to log factors) either by discarding the private data and training a public model, or by treating the public data as if it were private and running an optimal DP algorithm.
1 code implementation • 17 Oct 2022 • Andrew Lowy, Devansh Gupta, Meisam Razaviyayn
However, existing algorithms for DP fair learning either are not guaranteed to converge or require a full batch of data in each iteration in order to converge.
no code implementations • 15 Sep 2022 • Andrew Lowy, Meisam Razaviyayn
To address these limitations, this work provides near-optimal excess risk bounds that do not depend on the uniform Lipschitz parameter of the loss.
1 code implementation • 13 Mar 2022 • Andrew Lowy, Ali Ghafelebashi, Meisam Razaviyayn
silo data and two classes of Lipschitz continuous loss functions: First, we consider losses satisfying the Proximal Polyak-Łojasiewicz (PL) inequality, which extends the classical PL condition to the constrained setting.
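For reference, the classical PL inequality for a differentiable loss $f$ with minimum value $f^*$, and the proximal extension to composite objectives $F = f + g$ with $g$ convex and possibly nonsmooth (in the form popularized by Karimi, Nutini, and Schmidt), can be sketched as follows; the symbols $f$, $g$, $\mu$, and $\mathcal{D}_g$ are standard notation from that literature, not taken from the abstract above:

```latex
% Classical PL inequality (unconstrained, differentiable f):
\frac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^*\bigr)
\qquad \forall x .

% Proximal PL inequality for F(x) = f(x) + g(x):
\frac{1}{2}\,\mathcal{D}_g(x,\mu) \;\ge\; \mu\,\bigl(F(x) - F^*\bigr),
\quad \text{where}
\quad
\mathcal{D}_g(x,\alpha)
  := -2\alpha \min_y \Bigl[\,
       \langle \nabla f(x),\, y - x\rangle
       + \tfrac{\alpha}{2}\|y - x\|^2
       + g(y) - g(x)
     \Bigr].
```

Taking $g$ to be the indicator function of a constraint set recovers a constrained analogue of PL, which is why the condition is described as an extension to the constrained setting.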
2 code implementations • 17 Jun 2021 • Andrew Lowy, Meisam Razaviyayn
This paper studies federated learning (FL), especially cross-silo FL, with data from people who do not trust the server or other silos.
1 code implementation • NeurIPS 2021 • Andrew Lowy, Sina Baharlouei, Rakesh Pavan, Meisam Razaviyayn, Ahmad Beirami
We consider the problem of fair classification with discrete sensitive attributes and potentially large models and data sets, requiring stochastic solvers.
no code implementations • 9 Feb 2021 • Andrew Lowy, Meisam Razaviyayn
Finally, we apply our theory to two learning frameworks: tilted ERM and adversarial learning.
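As context for the first application, the tilted ERM (TERM) objective with tilt parameter $t \neq 0$ and per-sample losses $\ell_i$ is commonly written as below; this is the standard formulation from the TERM literature, given here only for orientation:

```latex
\min_{\theta}\;
\widetilde{R}_t(\theta)
:= \frac{1}{t}\,\log\!\Bigl(
     \frac{1}{n}\sum_{i=1}^{n} e^{\,t\,\ell_i(\theta)}
   \Bigr).
```

As $t \to 0$ this recovers standard ERM, while $t \to \infty$ approaches the max-loss objective, which is one reason tilted ERM serves as a useful test case for theory beyond uniformly Lipschitz losses.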
no code implementations • 1 Jan 2021 • Rakesh Pavan, Andrew Lowy, Sina Baharlouei, Meisam Razaviyayn, Ahmad Beirami
In this paper, we propose another notion of fairness violation, called Exponential Rényi Mutual Information (ERMI) between sensitive attributes and the predicted target.
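For discrete predictions $\hat{Y}$ and sensitive attributes $S$, a sketch of the ERMI definition (assuming the standard formulation from the FERMI line of work; $p_{\hat{Y},S}$, $p_{\hat{Y}}$, $p_S$ denote the joint and marginal distributions) is:

```latex
\mathrm{ERMI}(\hat{Y}; S)
\;=\; \sum_{y,\,s}
        \frac{p_{\hat{Y},S}(y,s)^2}{p_{\hat{Y}}(y)\,p_S(s)}
      \;-\; 1
\;=\; \chi^2\!\bigl(P_{\hat{Y},S} \,\big\|\, P_{\hat{Y}} \otimes P_S\bigr).
```

ERMI is the $\chi^2$-divergence between the joint distribution and the product of marginals, so it is zero exactly when $\hat{Y}$ and $S$ are independent (demographic parity), and it upper-bounds several other common fairness-violation measures, which makes it amenable to stochastic optimization.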
no code implementations • 18 Feb 2020 • Dmitrii M. Ostrovskii, Andrew Lowy, Meisam Razaviyayn
As a byproduct, the choice $\varepsilon_y = O(\varepsilon_x^2)$ allows for the $O(\varepsilon_x^{-3})$ complexity of finding an $\varepsilon_x$-stationary point for the standard Moreau envelope of the primal function.
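For reference, the Moreau envelope of a function $\varphi$ with parameter $\lambda > 0$, and the corresponding notion of near-stationarity, are standard and can be sketched as (here $\varphi$ would be the primal function $\varphi(x) = \max_y f(x,y)$ of the min-max problem):

```latex
% Moreau envelope of \varphi with smoothing parameter \lambda > 0:
\varphi_{\lambda}(x)
  := \min_{y}\Bigl\{ \varphi(y) + \frac{1}{2\lambda}\,\|y - x\|^2 \Bigr\}.

% \varepsilon_x-stationarity is measured through the envelope's gradient:
\|\nabla \varphi_{\lambda}(x)\| \;\le\; \varepsilon_x .
```

Because $\varphi_{\lambda}$ is smooth even when $\varphi$ is only weakly convex, $\|\nabla \varphi_{\lambda}(x)\|$ is the usual surrogate stationarity measure in this nonsmooth min-max setting.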
MSC classes (Optimization and Control): 90C06, 90C25, 90C26, 91A99