Search Results for author: Enayat Ullah

Found 14 papers, 3 papers with code

Communication-Efficient Federated Learning with Sketching

no code implementations ICML 2020 Daniel Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Vladimir Braverman, Joseph Gonzalez, Ion Stoica, Raman Arora

A key insight in the design of FedSketchedSGD is that, because the Count Sketch is linear, momentum and error accumulation can both be carried out within the sketch.

Federated Learning
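
The linearity insight above is easy to check concretely. Below is a minimal toy Count Sketch (my own illustration, not the paper's code) showing that sketching commutes with addition and scaling, so a momentum buffer can be maintained entirely in sketch space:

```python
import numpy as np

class CountSketch:
    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.buckets = rng.integers(0, cols, size=(rows, dim))  # column hash per coordinate
        self.signs = rng.choice([-1.0, 1.0], size=(rows, dim))  # sign hash per coordinate
        self.rows, self.cols = rows, cols

    def sketch(self, x):
        S = np.zeros((self.rows, self.cols))
        for r in range(self.rows):
            np.add.at(S[r], self.buckets[r], self.signs[r] * x)
        return S

d = 1000
cs = CountSketch(rows=5, cols=100, dim=d)
g1, g2 = np.random.default_rng(1).standard_normal((2, d))

# Linearity: sketching commutes with vector addition and scaling.
assert np.allclose(cs.sketch(g1) + cs.sketch(g2), cs.sketch(g1 + g2))

# Hence momentum can be accumulated inside the sketch itself.
rho = 0.9                                  # momentum coefficient (illustrative)
S_mom = np.zeros((cs.rows, cs.cols))
for g in (g1, g2):
    S_mom = rho * S_mom + cs.sketch(g)
assert np.allclose(S_mom, cs.sketch(rho * g1 + g2))  # equals sketching the momentum vector
```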

Differentially Private Non-Convex Optimization under the KL Condition with Optimal Rates

no code implementations 22 Nov 2023 Michael Menart, Enayat Ullah, Raman Arora, Raef Bassily, Cristóbal Guzmán

We further show that, without assuming the KL condition, the same gradient descent algorithm can achieve fast convergence to a stationary point when the gradient stays sufficiently large during the run of the algorithm.
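
As a generic illustration of the noisy gradient descent analyzed in this line of work (the loss, clipping threshold, and noise scale below are illustrative choices, not the paper's parameters):

```python
import numpy as np

def dp_gradient_descent(grad_fn, w0, steps=200, lr=0.1, clip=1.0, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(steps):
        g = grad_fn(w)
        g = g / max(1.0, np.linalg.norm(g) / clip)   # clip to bound per-step sensitivity
        w -= lr * (g + sigma * clip * rng.standard_normal(g.shape))  # Gaussian noise
    return w

# Example: smooth non-convex objective f(w) = mean(w_i^2 - cos(5 w_i)).
grad = lambda w: (2 * w + 5 * np.sin(5 * w)) / len(w)
w_hat = dp_gradient_descent(grad, np.ones(10))
print(np.linalg.norm(grad(w_hat)))   # gradient norm at the returned point
```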

Private Federated Learning with Autotuned Compression

1 code implementation 20 Jul 2023 Enayat Ullah, Christopher A. Choquette-Choo, Peter Kairouz, Sewoong Oh

We propose new techniques for reducing communication in private federated learning without the need for setting or tuning compression rates.

Federated Learning
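
The paper's autotuning rule is not reproduced here, but the kind of compressor whose rate such methods adapt on the fly can be sketched with generic unbiased stochastic quantization:

```python
import numpy as np

def stochastic_quantize(x, levels, rng):
    """Unbiased quantization of x onto `levels` evenly spaced points per coordinate."""
    scale = np.max(np.abs(x)) + 1e-12
    y = np.abs(x) / scale * (levels - 1)   # map magnitudes to [0, levels - 1]
    low = np.floor(y)
    prob_up = y - low                      # round up with prob = fractional part
    q = low + (rng.random(x.shape) < prob_up)
    return np.sign(x) * q * scale / (levels - 1)

rng = np.random.default_rng(0)
g = rng.standard_normal(1000)
for levels in (2, 4, 16):
    err = np.linalg.norm(stochastic_quantize(g, levels, rng) - g) / np.linalg.norm(g)
    print(levels, round(err, 3))   # more levels -> more bits -> lower relative error
```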

From Adaptive Query Release to Machine Unlearning

no code implementations 20 Jul 2023 Enayat Ullah, Raman Arora

We give efficient unlearning algorithms for linear and prefix-sum query classes.

Machine Unlearning
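
To see why prefix-sum query classes are amenable to deletion, here is a standard Fenwick-tree sketch (an illustration of the query class, not the paper's unlearning algorithm): removing a sample is a single O(log n) point update, after which every prefix-sum answer reflects the remaining data.

```python
class Fenwick:
    def __init__(self, n):
        self.n, self.t = n, [0.0] * (n + 1)

    def add(self, i, delta):        # 1-indexed point update
        while i <= self.n:
            self.t[i] += delta
            i += i & (-i)

    def prefix(self, i):            # sum of positions 1..i
        s = 0.0
        while i > 0:
            s += self.t[i]
            i -= i & (-i)
        return s

data = [3.0, 1.0, 4.0, 1.0, 5.0]
ft = Fenwick(len(data))
for i, x in enumerate(data, 1):
    ft.add(i, x)
print(ft.prefix(3))      # 8.0
ft.add(3, -data[2])      # "unlearn" the third sample in O(log n)
print(ft.prefix(3))      # 4.0
```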

Adversarial Robustness is at Odds with Lazy Training

no code implementations 18 Jun 2022 Yunjuan Wang, Enayat Ullah, Poorya Mianjy, Raman Arora

Recent works show that adversarial examples exist for random neural networks [Daniely and Schacham, 2020] and that these examples can be found using a single step of gradient ascent [Bubeck et al., 2021].

Adversarial Robustness
Learning Theory
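
The single-step gradient-ascent attack referenced above can be sketched on a randomly initialized two-layer ReLU network (the architecture and step size are illustrative choices, not taken from either cited paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 100, 512
W = rng.standard_normal((m, d)) / np.sqrt(d)      # random first layer
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)  # random output layer

def f(x):           # two-layer ReLU network output
    return a @ np.maximum(W @ x, 0.0)

def grad_f(x):      # gradient of f with respect to the input
    return W.T @ (a * (W @ x > 0))

x = rng.standard_normal(d) / np.sqrt(d)               # unit-scale input
x_adv = x - 0.3 * np.sign(f(x)) * np.sign(grad_f(x))  # one signed gradient step
print(f(x), f(x_adv))   # at this scale the output sign often flips
```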

Faster Rates of Convergence to Stationary Points in Differentially Private Optimization

no code implementations 2 Jun 2022 Raman Arora, Raef Bassily, Tomás González, Cristóbal Guzmán, Michael Menart, Enayat Ullah

We provide a new efficient algorithm that finds an $\tilde{O}\big(\big[\frac{\sqrt{d}}{n\varepsilon}\big]^{2/3}\big)$-stationary point in the finite-sum setting, where $n$ is the number of samples.

Stochastic Optimization
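
Plugging illustrative values into the stated rate shows how the stationarity guarantee improves with sample size (the numbers below are examples, not from the paper):

```python
# Evaluating (sqrt(d) / (n * eps))^(2/3) for a few illustrative sample sizes.
d, eps = 1000, 1.0
for n in (10_000, 100_000, 1_000_000):
    print(n, ((d ** 0.5) / (n * eps)) ** (2 / 3))
```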

Differentially Private Generalized Linear Models Revisited

no code implementations 6 May 2022 Raman Arora, Raef Bassily, Cristóbal Guzmán, Michael Menart, Enayat Ullah

For this case, we close the gap in the existing work and show that the optimal rate is (up to log factors) $\Theta\left(\frac{\Vert w^*\Vert}{\sqrt{n}} + \min\left\{\frac{\Vert w^*\Vert}{\sqrt{n\epsilon}},\frac{\sqrt{\text{rank}}\Vert w^*\Vert}{n\epsilon}\right\}\right)$, where $\text{rank}$ is the rank of the design matrix.

Model Selection
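
To see when each branch of the min in the rate dominates, one can evaluate the two private terms for illustrative values (roughly, rank below nε favors the rank-dependent term):

```python
import math

n, eps, w_norm = 100_000, 1.0, 1.0   # illustrative values, not from the paper
for rank in (10, 1_000, 100_000):
    t1 = w_norm / math.sqrt(n * eps)                  # dimension-free term
    t2 = math.sqrt(rank) * w_norm / (n * eps)         # rank-dependent term
    print(rank, min(t1, t2))
```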

Machine Unlearning via Algorithmic Stability

no code implementations 25 Feb 2021 Enayat Ullah, Tung Mai, Anup Rao, Ryan Rossi, Raman Arora

Our key contribution is the design of corresponding efficient unlearning algorithms, which are based on constructing a (maximal) coupling of Markov chains for the noisy SGD procedure.

Machine Unlearning
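
A maximal coupling of two Gaussians, the basic building block behind coupling noisy SGD chains, can be sketched with the standard construction (this is the generic coupling, not the paper's full unlearning procedure): the two chains agree with the highest probability any coupling allows.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def maximal_coupling(mu_p, mu_q, sigma, rng):
    """Maximal coupling of N(mu_p, sigma^2) and N(mu_q, sigma^2)."""
    x = rng.normal(mu_p, sigma)
    if rng.uniform(0.0, gauss_pdf(x, mu_p, sigma)) <= gauss_pdf(x, mu_q, sigma):
        return x, x                      # the two chains take the same value
    while True:                          # otherwise draw y from the residual of q
        y = rng.normal(mu_q, sigma)
        if rng.uniform(0.0, gauss_pdf(y, mu_q, sigma)) > gauss_pdf(y, mu_p, sigma):
            return x, y

rng = np.random.default_rng(0)
pairs = [maximal_coupling(0.0, 0.1, 1.0, rng) for _ in range(10_000)]
print(np.mean([x == y for x, y in pairs]))   # ~0.96 = 1 - TV(p, q) for these parameters
```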

FetchSGD: Communication-Efficient Federated Learning with Sketching

no code implementations 15 Jul 2020 Daniel Rothchild, Ashwinee Panda, Enayat Ullah, Nikita Ivkin, Ion Stoica, Vladimir Braverman, Joseph Gonzalez, Raman Arora

A key insight in the design of FetchSGD is that, because the Count Sketch is linear, momentum and error accumulation can both be carried out within the sketch.

Federated Learning

Communication-efficient distributed SGD with Sketching

2 code implementations NeurIPS 2019 Nikita Ivkin, Daniel Rothchild, Enayat Ullah, Vladimir Braverman, Ion Stoica, Raman Arora

Large-scale distributed training of neural networks is often limited by network bandwidth, wherein the communication time overwhelms the local computation time.

Streaming Kernel PCA with $\tilde{O}(\sqrt{n})$ Random Features

1 code implementation 2 Aug 2018 Enayat Ullah, Poorya Mianjy, Teodor V. Marinov, Raman Arora

We study the statistical and computational aspects of kernel principal component analysis using random Fourier features and show that under mild assumptions, $O(\sqrt{n} \log n)$ features suffice to achieve $O(1/\epsilon^2)$ sample complexity.
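
A minimal sketch of kernel PCA via random Fourier features for the RBF kernel (the bandwidth and feature count below are illustrative, and this is plain batch PCA, not the paper's streaming algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 20
m = int(np.sqrt(n) * np.log(n))            # ~ sqrt(n) log n random features
X = rng.standard_normal((n, d))
gamma = 0.5                                # RBF bandwidth parameter (illustrative)

# Random Fourier features: phi(x) = sqrt(2/m) * cos(W x + b) approximates
# the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
W = rng.normal(0.0, np.sqrt(2 * gamma), size=(m, d))
b = rng.uniform(0, 2 * np.pi, size=m)
Phi = np.sqrt(2.0 / m) * np.cos(X @ W.T + b)

# PCA in feature space: top eigenvectors of the empirical covariance.
Phi -= Phi.mean(axis=0)
cov = Phi.T @ Phi / n
eigvals, eigvecs = np.linalg.eigh(cov)
top_k = eigvecs[:, -5:]                    # top-5 principal directions
scores = Phi @ top_k                       # embeddings of the data
print(scores.shape)                        # (2000, 5)
```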
