Search Results for author: Andrew Lowy

Found 11 papers, 5 papers with code

How to Make the Gradients Small Privately: Improved Rates for Differentially Private Non-Convex Optimization

no code implementations · 17 Feb 2024 · Andrew Lowy, Jonathan Ullman, Stephen J. Wright

We use this framework to obtain improved, and sometimes optimal, rates for several classes of non-convex loss functions.
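The paper's framework is not reproduced here, but the basic private-gradient primitive underlying most DP optimization methods is per-sample clipping followed by Gaussian noising (as in standard DP-SGD). A minimal sketch under those assumptions — illustrative parameters, not the authors' algorithm:

```python
import numpy as np

def private_mean_gradient(per_sample_grads, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Clip each per-sample gradient to `clip_norm`, average, and add
    Gaussian noise calibrated to the clipping sensitivity (standard
    DP-SGD primitive; parameter names here are illustrative)."""
    rng = rng or np.random.default_rng(0)
    grads = np.asarray(per_sample_grads, dtype=float)
    n = grads.shape[0]
    # Rescale any gradient whose norm exceeds clip_norm down onto the ball.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    mean = clipped.mean(axis=0)
    # Sensitivity of the clipped mean is clip_norm / n; the noise standard
    # deviation is that sensitivity times the noise multiplier.
    noise = rng.normal(scale=noise_mult * clip_norm / n, size=mean.shape)
    return mean + noise
```

With `noise_mult=0.0` the function reduces to plain clipped averaging, which makes the clipping behavior easy to check in isolation.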

Why Does Differential Privacy with Large Epsilon Defend Against Practical Membership Inference Attacks?

no code implementations · 14 Feb 2024 · Andrew Lowy, Zhuohang Li, Jing Liu, Toshiaki Koike-Akino, Kieran Parsons, Ye Wang

In practical applications, such a worst-case guarantee may be overkill: practical attackers may lack exact knowledge of (nearly all of) the private data, and our data set might be easier to defend, in some sense, than the worst-case data set.
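For reference, the worst-case guarantee in question is the textbook $(\varepsilon, \delta)$-differential-privacy bound (a standard definition, not specific to this paper):

```latex
% (\varepsilon, \delta)-DP: for all neighboring data sets D, D'
% (differing in one record) and all measurable events S,
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta
```

Large $\varepsilon$ makes this worst-case bound nearly vacuous, which is why it is notable that it can still defend against practical attacks.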

Inference Attack · Membership Inference Attack

Optimal Differentially Private Model Training with Public Data

1 code implementation · 26 Jun 2023 · Andrew Lowy, Zeman Li, Tianjian Huang, Meisam Razaviyayn

We show that the optimal error rates can be attained (up to log factors) by either discarding private data and training a public model, or treating public data like it is private and using an optimal DP algorithm.

Stochastic Differentially Private and Fair Learning

1 code implementation · 17 Oct 2022 · Andrew Lowy, Devansh Gupta, Meisam Razaviyayn

However, existing algorithms for DP fair learning are either not guaranteed to converge or require a full batch of data in each iteration to converge.

Binary Classification · Decision Making · +2

Private Stochastic Optimization With Large Worst-Case Lipschitz Parameter: Optimal Rates for (Non-Smooth) Convex Losses and Extension to Non-Convex Losses

no code implementations · 15 Sep 2022 · Andrew Lowy, Meisam Razaviyayn

To address these limitations, this work provides near-optimal excess risk bounds that do not depend on the uniform Lipschitz parameter of the loss.

Stochastic Optimization

Private Non-Convex Federated Learning Without a Trusted Server

1 code implementation · 13 Mar 2022 · Andrew Lowy, Ali Ghafelebashi, Meisam Razaviyayn

…silo data and two classes of Lipschitz continuous loss functions: first, we consider losses satisfying the Proximal Polyak-Lojasiewicz (PL) inequality, which extends the classical PL condition to the constrained setting.
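As a reminder, the classical PL condition referenced here (standard definition from the optimization literature, not quoted from this paper) says that the squared gradient norm dominates the suboptimality gap; for a function $f$ with minimum value $f^*$ and parameter $\mu > 0$:

```latex
% Polyak-Lojasiewicz (PL) inequality:
\frac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu \left( f(x) - f^* \right)
\quad \text{for all } x
```

The proximal-PL variant replaces the gradient norm with an analogous proximal-gradient quantity so that the condition makes sense for constrained or composite objectives.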

Federated Learning

Private Federated Learning Without a Trusted Server: Optimal Algorithms for Convex Losses

2 code implementations · 17 Jun 2021 · Andrew Lowy, Meisam Razaviyayn

This paper studies federated learning (FL), especially cross-silo FL, with data from people who do not trust the server or other silos.

Federated Learning · Stochastic Optimization

A Stochastic Optimization Framework for Fair Risk Minimization

1 code implementation · NeurIPS 2021 · Andrew Lowy, Sina Baharlouei, Rakesh Pavan, Meisam Razaviyayn, Ahmad Beirami

We consider the problem of fair classification with discrete sensitive attributes and potentially large models and data sets, requiring stochastic solvers.

Binary Classification · Fairness · +1

Fair Empirical Risk Minimization via Exponential Rényi Mutual Information

no code implementations · 1 Jan 2021 · Rakesh Pavan, Andrew Lowy, Sina Baharlouei, Meisam Razaviyayn, Ahmad Beirami

In this paper, we propose another notion of fairness violation, called Exponential Rényi Mutual Information (ERMI) between sensitive attributes and the predicted target.
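To our understanding (a hedged sketch based on the related FERMI line of work, not a verbatim quote of this paper), ERMI between the prediction $\hat{Y}$ and sensitive attribute $S$ is the $\chi^2$-divergence between their joint distribution and the product of their marginals:

```latex
% ERMI as chi-squared divergence (sketch; notation assumed, not verbatim):
\mathrm{ERMI}(\hat{Y}; S)
  \;=\; \mathbb{E}\!\left[ \frac{p_{\hat{Y},S}(\hat{Y}, S)}
                                {p_{\hat{Y}}(\hat{Y})\, p_S(S)} \right] - 1
  \;=\; \chi^2\!\left( p_{\hat{Y},S} \;\middle\|\; p_{\hat{Y}} \otimes p_S \right)
```

Under this definition, ERMI is zero exactly when $\hat{Y}$ and $S$ are independent, i.e., when demographic parity holds.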

Attribute · Fairness · +1

Efficient Search of First-Order Nash Equilibria in Nonconvex-Concave Smooth Min-Max Problems

no code implementations · 18 Feb 2020 · Dmitrii M. Ostrovskii, Andrew Lowy, Meisam Razaviyayn

As a byproduct, the choice $\varepsilon_y = O(\varepsilon_x^2)$ allows for the $O(\varepsilon_x^{-3})$ complexity of finding an $\varepsilon_x$-stationary point for the standard Moreau envelope of the primal function.
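For readers unfamiliar with the Moreau envelope referenced above, the standard definition (with smoothing parameter $\lambda > 0$; a textbook definition, not a quote from this paper) is:

```latex
% Moreau envelope of f with parameter \lambda > 0:
f_{\lambda}(x) \;=\; \min_{y} \left\{ f(y) + \frac{1}{2\lambda}\,\|x - y\|^2 \right\}
```

An $\varepsilon_x$-stationary point of $f_\lambda$ is then a point $x$ with $\|\nabla f_\lambda(x)\| \le \varepsilon_x$, the usual near-stationarity measure for weakly convex objectives.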

Optimization and Control (MSC classes: 90C06, 90C25, 90C26, 91A99)
