Search Results for author: Michael Menart

Found 6 papers, 0 papers with code

Differentially Private Non-Convex Optimization under the KL Condition with Optimal Rates

no code implementations • 22 Nov 2023 • Michael Menart, Enayat Ullah, Raman Arora, Raef Bassily, Cristóbal Guzmán

We further show that, without assuming the KL condition, the same gradient descent algorithm can achieve fast convergence to a stationary point when the gradient stays sufficiently large during the run of the algorithm.
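
A minimal sketch of Gaussian-mechanism noisy gradient descent, the generic template behind guarantees like the one quoted above; the clipping convention, noise calibration, and the toy loss are illustrative assumptions, not the paper's exact algorithm or constants.

```python
import numpy as np

def noisy_gradient_descent(grad_fn, w0, n, eps, delta, T, lr, clip=1.0):
    """Illustrative (eps, delta)-DP gradient descent. grad_fn(w) is
    assumed to return the average of per-sample gradients clipped to
    norm `clip`; sigma is a standard Gaussian-mechanism calibration
    for T adaptive steps (not the paper's constants)."""
    sigma = clip * np.sqrt(8.0 * T * np.log(1.0 / delta)) / (n * eps)
    w = np.array(w0, dtype=float)
    for _ in range(T):
        w = w - lr * (grad_fn(w) + np.random.normal(0.0, sigma, size=w.shape))
    return w

# Toy usage on a smooth non-convex loss f(w) = (||w||^2 - 1)^2 / 4.
grad = lambda w: (np.dot(w, w) - 1.0) * w
w_priv = noisy_gradient_descent(grad, np.ones(5), n=10_000, eps=1.0,
                                delta=1e-5, T=100, lr=0.1)
```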

Differentially Private Algorithms for the Stochastic Saddle Point Problem with Optimal Rates for the Strong Gap

no code implementations • 24 Feb 2023 • Raef Bassily, Cristóbal Guzmán, Michael Menart

We show that convex-concave Lipschitz stochastic saddle point problems (also known as stochastic minimax optimization) can be solved under the constraint of $(\epsilon,\delta)$-differential privacy with \emph{strong (primal-dual) gap} rate of $\tilde O\big(\frac{1}{\sqrt{n}} + \frac{\sqrt{d}}{n\epsilon}\big)$, where $n$ is the dataset size and $d$ is the dimension of the problem.

Stochastic Optimization
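
For context, the generic private baseline for such problems is noisy stochastic gradient descent-ascent with iterate averaging; a minimal sketch follows. Averaging is the standard device for convex-concave guarantees, but it controls the weak gap, whereas the paper's headline result is for the stronger primal-dual gap, so this is only the baseline template, with an illustrative noise calibration.

```python
import numpy as np

def noisy_sgda(grad_x, grad_y, x0, y0, n, eps, delta, T, lr, clip=1.0):
    """Illustrative DP stochastic gradient descent-ascent for a
    convex-concave problem min_x max_y f(x, y). grad_x and grad_y are
    assumed to return stochastic gradients clipped to norm `clip`."""
    sigma = clip * np.sqrt(8.0 * T * np.log(1.0 / delta)) / (n * eps)
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    x_avg, y_avg = np.zeros_like(x), np.zeros_like(y)
    for _ in range(T):
        nx = np.random.normal(0.0, sigma, size=x.shape)
        ny = np.random.normal(0.0, sigma, size=y.shape)
        # Descend in x, ascend in y.
        x, y = x - lr * (grad_x(x, y) + nx), y + lr * (grad_y(x, y) + ny)
        x_avg += x / T
        y_avg += y / T
    return x_avg, y_avg
```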

Faster Rates of Convergence to Stationary Points in Differentially Private Optimization

no code implementations • 2 Jun 2022 • Raman Arora, Raef Bassily, Tomás González, Cristóbal Guzmán, Michael Menart, Enayat Ullah

We provide a new efficient algorithm that finds an $\tilde{O}\big(\big[\frac{\sqrt{d}}{n\varepsilon}\big]^{2/3}\big)$-stationary point in the finite-sum setting, where $n$ is the number of samples.

Stochastic Optimization
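
Ignoring logarithmic factors and constants, the stationary-point rate quoted above is easy to evaluate numerically; a small helper, purely for intuition about how the guarantee scales with $n$, $d$, and $\varepsilon$.

```python
def stationarity_rate(n, d, eps):
    """(sqrt(d) / (n * eps)) ** (2/3), the rate from the excerpt,
    up to log factors and constants."""
    return (d ** 0.5 / (n * eps)) ** (2.0 / 3.0)

# e.g. n = 10**5 samples, d = 10**3 dimensions, eps = 1.0:
print(stationarity_rate(1e5, 1e3, 1.0))  # ~0.0046
```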

Differentially Private Generalized Linear Models Revisited

no code implementations • 6 May 2022 • Raman Arora, Raef Bassily, Cristóbal Guzmán, Michael Menart, Enayat Ullah

For this case, we close the gap in the existing work and show that the optimal rate is (up to log factors) $\Theta\left(\frac{\Vert w^*\Vert}{\sqrt{n}} + \min\left\{\frac{\Vert w^*\Vert}{\sqrt{n\epsilon}},\frac{\sqrt{\text{rank}}\Vert w^*\Vert}{n\epsilon}\right\}\right)$, where $\text{rank}$ is the rank of the design matrix.

Model Selection
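
The optimal rate above decomposes into a non-private statistical term plus a private term that is the minimum of two regimes; a small evaluator (up to constants and log factors) makes the crossover between the regimes visible.

```python
import math

def glm_excess_risk_rate(norm_w, n, eps, rank):
    """||w*||/sqrt(n) + min(||w*||/sqrt(n*eps),
    sqrt(rank)*||w*||/(n*eps)): the Theta(...) rate from the excerpt,
    up to constants and log factors."""
    non_private = norm_w / math.sqrt(n)
    private = min(norm_w / math.sqrt(n * eps),
                  math.sqrt(rank) * norm_w / (n * eps))
    return non_private + private

# The rank-dependent branch of the min wins once rank <= n * eps:
print(glm_excess_risk_rate(norm_w=1.0, n=10**5, eps=1.0, rank=100))
```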

Differentially Private Stochastic Optimization: New Results in Convex and Non-Convex Settings

no code implementations • NeurIPS 2021 • Raef Bassily, Cristóbal Guzmán, Michael Menart

For the $\ell_1$-case with smooth losses and polyhedral constraint, we provide the first nearly dimension-independent rate, $\tilde O\big(\frac{\log^{2/3} d}{(n\varepsilon)^{1/3}}\big)$, in linear time.

Stochastic Optimization
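
Nearly dimension-independent $\ell_1$ rates in this literature typically come from private Frank-Wolfe-style methods, where each step selects one vertex of the $\ell_1$ ball via report-noisy-max and so pays only a $\log d$ factor for privacy noise. A minimal sketch under those assumptions; the Laplace noise scale and step schedule here are rough illustrations, not the paper's exact (variance-reduced) algorithm.

```python
import numpy as np

def dp_frank_wolfe(grad_fn, d, n, eps, T, radius=1.0, clip=1.0):
    """Illustrative private Frank-Wolfe over the l1 ball of the given
    radius. Each step picks a signed coordinate vertex by report-noisy-
    max over the 2d vertices, so the noise cost grows only with log d.
    grad_fn(w) is assumed to return the clipped average gradient; the
    Laplace scale is a rough per-step calibration, not tight."""
    w = np.zeros(d)
    for t in range(T):
        g = grad_fn(w)
        # Scores -<g, v> for v = +e_i (first half), v = -e_i (second half).
        scores = np.concatenate([-g, g])
        scores = scores + np.random.laplace(
            scale=2.0 * clip * T / (n * eps), size=2 * d)
        j = int(np.argmax(scores))
        v = np.zeros(d)
        v[j % d] = radius if j < d else -radius
        gamma = 2.0 / (t + 2)  # standard Frank-Wolfe step size
        w = (1.0 - gamma) * w + gamma * v
    return w
```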
