Search Results for author: Keith Rush

Found 7 papers, 4 papers with code

Multi-Epoch Matrix Factorization Mechanisms for Private Machine Learning

no code implementations • 12 Nov 2022 • Christopher A. Choquette-Choo, H. Brendan McMahan, Keith Rush, Abhradeep Thakurta

Our key contribution is an extension of the online matrix factorization DP mechanism to multiple participations, substantially generalizing the approach of DMRST2022.

Image Classification • Language Modelling
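
The core construction behind this paper is easy to state: to privately release the prefix sums A·x that DP-SGD needs, factor the workload matrix as A = B·C, add noise scaled to the sensitivity of C, and post-process with B. The numpy sketch below illustrates the single-participation mechanism; the factorization, clipping bound, and noise multiplier are illustrative choices, and the paper's multi-epoch contribution amounts to replacing the sensitivity computation marked in the comments.

    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.default_rng(0)
    n, d = 8, 3
    x = rng.normal(size=(n, d))              # stream of clipped per-step gradients (norm <= 1)

    A = np.tril(np.ones((n, n)))             # workload: every prefix sum of the stream
    B = C = np.real(sqrtm(A))                # one valid factorization A = B @ C (illustrative)

    sigma = 1.0                              # noise multiplier; calibrated to (eps, delta) in practice
    sens = np.linalg.norm(C, axis=0).max()   # L2 sensitivity when each step participates once;
                                             # the multi-epoch extension replaces exactly this line
    z = rng.normal(scale=sigma * sens, size=(n, d))

    noisy_prefix_sums = B @ (C @ x + z)      # DP estimate of A @ x
    print(np.linalg.norm(noisy_prefix_sums - A @ x))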

Improved Differential Privacy for SGD via Optimal Private Linear Operators on Adaptive Streams

1 code implementation • 16 Feb 2022 • Sergey Denisov, Brendan McMahan, Keith Rush, Adam Smith, Abhradeep Guha Thakurta

Motivated by recent applications requiring differential privacy over adaptive streams, we investigate the question of optimal instantiations of the matrix mechanism in this setting.

Federated Learning
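
What makes this mechanism usable for SGD is the adaptivity point in the abstract: when both factors are lower triangular, the step-t output depends only on inputs seen up to step t, so later gradients may depend on earlier noisy releases. Below is a minimal streaming sketch using the same illustrative square-root factorization as above; the paper additionally shows how to choose the factorization optimally, which is not attempted here.

    import numpy as np
    from scipy.linalg import sqrtm

    rng = np.random.default_rng(0)
    n, d, sigma = 8, 3, 1.0
    A = np.tril(np.ones((n, n)))                 # prefix-sum workload
    B = C = np.tril(np.real(sqrtm(A)))           # lower-triangular square root
                                                 # (tril zeroes out round-off above the diagonal)
    sens = np.linalg.norm(C, axis=0).max()       # single-participation L2 sensitivity

    xs = np.zeros((n, d))
    noisy_cx = np.zeros((n, d))
    for t in range(n):
        xs[t] = rng.normal(size=d)               # stand-in for the step-t clipped gradient,
                                                 # which may depend on every earlier output
        noisy_cx[t] = C[t, : t + 1] @ xs[: t + 1] + rng.normal(scale=sigma * sens, size=d)
        out_t = B[t, : t + 1] @ noisy_cx[: t + 1]  # noisy prefix sum released at step t
        print(t, np.linalg.norm(out_t - xs[: t + 1].sum(axis=0)))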

Iterated Vector Fields and Conservatism, with Applications to Federated Learning

no code implementations • 8 Sep 2021 • Zachary Charles, Keith Rush

In the context of federated learning, we show that when clients have loss functions whose gradients satisfy this condition, federated averaging is equivalent to gradient descent on a surrogate loss function.

Federated Learning
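
The equivalence claimed in the abstract can be checked by hand in the quadratic case. Below is a toy scalar verification: each client minimizes 0.5 * a_i * (w - c_i)^2, and one FedAvg round is compared against one gradient step on the induced surrogate. The surrogate gradient written here is derived for quadratics only and should not be read as the paper's general statement.

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.uniform(0.5, 2.0, size=5)        # per-client curvatures
    c = rng.normal(size=5)                   # per-client minimizers
    eta, K, w = 0.1, 4, 3.0

    # One FedAvg round: K local gradient steps per client, then a server average.
    ws = np.full(5, w)
    for _ in range(K):
        ws = ws - eta * a * (ws - c)
    fedavg_next = ws.mean()

    # Gradient of the induced surrogate in the quadratic case (derived, not general):
    # grad f~(w) = mean_i (1 - (1 - eta*a_i)^K) / eta * (w - c_i)
    surrogate_grad = np.mean((1 - (1 - eta * a) ** K) / eta * (w - c))
    gd_next = w - eta * surrogate_grad

    print(fedavg_next, gd_next)              # identical up to float round-off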

Federated Reconstruction: Partially Local Federated Learning

3 code implementations • NeurIPS 2021 • Karan Singhal, Hakim Sidahmed, Zachary Garrett, Shanshan Wu, Keith Rush, Sushant Prakash

We also describe the successful deployment of this approach at scale for federated collaborative filtering in a mobile keyboard application.

Collaborative Filtering • Federated Learning • +1
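
The mechanism behind the deployment is the paper's split of model parameters: global parameters (e.g., item embeddings) are aggregated by the server, while each user's embedding is a local parameter that is reconstructed from scratch every round and never leaves the device. A minimal matrix-factorization sketch along those lines; the shapes, learning rates, and two-client cohort are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n_items, dim = 20, 4
    items = rng.normal(scale=0.1, size=(n_items, dim))       # global parameters (uploaded)

    def client_round(items, rated_ids, ratings, lr=0.1, recon_steps=5):
        user = np.zeros(dim)                                  # local parameter (never uploaded)
        for _ in range(recon_steps):                          # reconstruction phase, from scratch
            err = items[rated_ids] @ user - ratings
            user -= lr * items[rated_ids].T @ err / len(ratings)
        err = items[rated_ids] @ user - ratings               # update phase: global grads only
        grad = np.zeros_like(items)
        grad[rated_ids] = np.outer(err, user) / len(ratings)
        return grad

    grads = [client_round(items, np.array([1, 3, 7]), np.array([1.0, 0.5, -0.2])),
             client_round(items, np.array([0, 2, 9]), np.array([0.3, -1.0, 0.8]))]
    items -= 0.5 * np.mean(grads, axis=0)                     # server aggregation step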

Fast Dimension Independent Private AdaGrad on Publicly Estimated Subspaces

no code implementations • 14 Aug 2020 • Peter Kairouz, Mónica Ribero, Keith Rush, Abhradeep Thakurta

In particular, we show that if the gradients lie in a known constant-rank subspace, and assuming algorithmic access to an envelope which bounds decaying sensitivity, one can achieve faster convergence to an excess empirical risk of $\tilde{O}(1/(\epsilon n))$, where $\epsilon$ is the privacy budget and $n$ the number of samples.
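
Read operationally, the abstract suggests the following recipe: project each gradient onto a rank-k subspace estimated from public data, privatize in those k coordinates so the noise scales with k rather than the ambient dimension d, and run AdaGrad there. A minimal sketch under that reading; the random basis, clipping bound, and noise multiplier stand in for the paper's public estimation and calibration.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, sigma, clip = 100, 5, 1.0, 1.0
    V = np.linalg.qr(rng.normal(size=(d, k)))[0]  # stand-in for a publicly estimated basis
    w = np.zeros(d)
    accum = np.zeros(k)                           # AdaGrad accumulator lives in the subspace

    def private_step(grad, w, accum, lr=0.5):
        g = V.T @ grad                                        # project to rank-k coordinates
        g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))     # clip to bound sensitivity
        g = g + rng.normal(scale=sigma * clip, size=k)        # noise dimension is k, not d
        accum += g * g                                        # diagonal AdaGrad statistics
        w = w - lr * (V @ (g / np.sqrt(accum + 1e-8)))        # preconditioned step, mapped back
        return w, accum

    for _ in range(10):                           # toy loop with synthetic full-space gradients
        w, accum = private_step(rng.normal(size=d), w, accum)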

Adaptive Federated Optimization

3 code implementations • ICLR 2021 • Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, H. Brendan McMahan

Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data.

Federated Learning
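
The paper's adaptive methods (FedAdagrad, FedAdam, FedYogi) keep client training as-is and make the server optimizer adaptive: the averaged client delta is treated as a pseudo-gradient and fed to an Adam-style update. A minimal FedAdam-flavored sketch; the quadratic client objective and all hyperparameters here are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    d, beta1, beta2, tau, server_lr = 10, 0.9, 0.99, 1e-3, 0.5
    w, m, v = np.zeros(d), np.zeros(d), np.zeros(d)

    def client_update(w, target, lr=0.05, steps=5):
        w_local = w.copy()                       # toy local SGD on 0.5 * ||w - target||^2
        for _ in range(steps):
            w_local -= lr * (w_local - target)
        return w_local - w                       # the model delta sent to the server

    for rnd in range(200):
        cohort = [rng.normal(loc=1.0, size=d) for _ in range(4)]
        delta = np.mean([client_update(w, t) for t in cohort], axis=0)
        m = beta1 * m + (1 - beta1) * delta      # first moment of the pseudo-gradient
        v = beta2 * v + (1 - beta2) * delta**2   # Adam-style second moment
        w = w + server_lr * m / (np.sqrt(v) + tau)

    print(w.mean())                              # drifts toward the clients' common optimum at 1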

Improving Federated Learning Personalization via Model Agnostic Meta Learning

2 code implementations • 27 Sep 2019 • Yihan Jiang, Jakub Konečný, Keith Rush, Sreeram Kannan

We present FL as a natural source of practical applications for MAML algorithms, and make the following observations.

Federated Learning • Meta-Learning
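
One way to see the FL-as-MAML observation concretely: a FedAvg-trained global model acts like a meta-learned initialization, and personalization is a few local fine-tuning steps at deployment. A toy scalar sketch, with client data and step counts chosen only for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    clients = [rng.normal(loc=mu, size=50) for mu in (-1.0, 0.0, 2.0)]
    w = 0.0                                     # scalar model: predict a client's mean

    def adapt(w, data, lr=0.3, steps=3):        # a few local steps on 0.5 * (w - mean)^2
        for _ in range(steps):
            w -= lr * (w - data.mean())
        return w

    for rnd in range(20):                       # FedAvg produces the shared initialization
        w += np.mean([adapt(w, data) - w for data in clients])

    personalized = [adapt(w, data) for data in clients]
    print(round(w, 3), [round(p, 3) for p in personalized])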
