1 code implementation • 11 Mar 2024 • Keith Rush, Zachary Charles, Zachary Garrett
We show that FAX provides an easily programmable, performant, and scalable framework for federated computations in the data center.
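Below is a minimal sketch of the kind of computation FAX targets, assuming nothing about FAX's actual API: one federated averaging round written as a pure JAX function, with clients as a leading array axis of the sort a FAX-style system shards across data-center accelerators.

```python
import jax
import jax.numpy as jnp

def client_update(server_params, client_data, lr=0.1, steps=5):
    """Run a few local gradient steps on one client's least-squares loss."""
    x, y = client_data
    def loss(params):
        return jnp.mean((x @ params - y) ** 2)
    params = server_params
    for _ in range(steps):
        params = params - lr * jax.grad(loss)(params)
    return params - server_params  # model delta sent back to the server

def federated_averaging_round(server_params, all_client_data):
    # vmap stands in for "run on every client"; a FAX-style system would
    # shard this leading client axis across machines instead.
    deltas = jax.vmap(lambda d: client_update(server_params, d))(all_client_data)
    return server_params + jnp.mean(deltas, axis=0)  # server-side aggregation

# Toy usage: 4 clients, each holding 8 examples of a 3-dim linear problem.
key = jax.random.PRNGKey(0)
xs = jax.random.normal(key, (4, 8, 3))
ys = jnp.sum(xs, axis=-1)
params = federated_averaging_round(jnp.zeros(3), (xs, ys))
```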
no code implementations • NeurIPS 2023 • Anastasia Koloskova, Ryan McKenna, Zachary Charles, Keith Rush, Brendan McMahan
We propose a simplified setting that distills key facets of these methods and isolates the impact of linearly correlated noise.
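The following toy instance of such a simplified setting (details assumed, not the paper's exact experiment) runs gradient descent on a one-dimensional quadratic where the injected noise at step t is linearly correlated across steps, here the anticorrelated choice w_t = z_t - z_{t-1} of the kind matrix-factorization DP mechanisms produce, versus i.i.d. noise.

```python
import jax
import jax.numpy as jnp

def noisy_gd(noise, x0=5.0, lr=0.1):
    x = x0
    for w in noise:
        x = x - lr * (x + w)   # gradient of f(x) = x^2 / 2, plus noise
    return x

key = jax.random.PRNGKey(0)
z = jax.random.normal(key, (201,))
iid_noise = z[1:]              # independent noise at each step
corr_noise = z[1:] - z[:-1]    # linearly correlated (anticorrelated) noise

print(abs(noisy_gd(iid_noise)))   # typically the larger final error
print(abs(noisy_gd(corr_noise)))  # correlated noise partially cancels
```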
no code implementations • 18 Jan 2023 • Keith Rush, Zachary Charles, Zachary Garrett
We propose a federated automatic differentiation (FAD) framework that 1) enables computing derivatives of functions involving client and server computation, as well as communication between them, and 2) operates in a manner compatible with existing federated technology.
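A minimal JAX illustration of why such a framework is possible (not the paper's implementation): a federated round is a composition of broadcast, per-client work, and aggregation, each of which standard reverse-mode AD already knows how to differentiate.

```python
import jax
import jax.numpy as jnp

def round_loss(server_params, client_data):
    # Federated building blocks as array ops: broadcast (server -> clients),
    # per-client computation (vmap), and aggregation (clients -> server).
    per_client = jax.vmap(lambda d: jnp.sum((server_params - d) ** 2))(client_data)
    return jnp.mean(per_client)

# Reverse-mode AD composes through broadcast/map/aggregate, yielding the
# derivative of a cross-client quantity with respect to server state.
grads = jax.grad(round_loss)(jnp.zeros(3), jnp.ones((5, 3)))
print(grads)
```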
1 code implementation • 12 Nov 2022 • Christopher A. Choquette-Choo, H. Brendan McMahan, Keith Rush, Abhradeep Thakurta
We formalize the problem of DP mechanisms for adaptive streams with multiple participations and introduce a non-trivial extension of online matrix factorization DP mechanisms to our setting.
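An illustrative brute-force computation of the quantity that makes this extension non-trivial (assumed setup, not the paper's algorithm): with multiple participations, neighboring datasets can differ in every step a single user touches, so the L2 sensitivity of an encoder matrix C becomes a maximum over allowed participation patterns, here patterns whose steps are at least min_sep apart.

```python
import itertools
import numpy as np

def multi_participation_sensitivity(C, k, min_sep):
    """Max ||sum_i s_i C[:, i]|| over <= k indices spaced >= min_sep apart."""
    n = C.shape[1]
    best = 0.0
    for size in range(1, k + 1):
        for S in itertools.combinations(range(n), size):
            if any(b - a < min_sep for a, b in zip(S, S[1:])):
                continue  # participation pattern not allowed
            for signs in itertools.product([-1.0, 1.0], repeat=size):
                v = sum(s * C[:, i] for s, i in zip(signs, S))
                best = max(best, float(np.linalg.norm(v)))
    return best

C = np.tril(np.ones((6, 6)))   # toy encoder matrix
print(multi_participation_sensitivity(C, k=2, min_sep=3))
```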
1 code implementation • 16 Feb 2022 • Sergey Denisov, Brendan McMahan, Keith Rush, Adam Smith, Abhradeep Guha Thakurta
Motivated by recent applications requiring differential privacy over adaptive streams, we investigate the question of optimal instantiations of the matrix mechanism in this setting.
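A compact sketch of the matrix mechanism's shape for streaming prefix sums (the paper instead optimizes the factorization and handles adaptive streams; the matrix square root here is only a stand-in that happens to keep the decoder lower triangular, so decoding can proceed online): to release A @ x privately with A = B @ C, perturb the encoded stream C @ x and decode, so the noise scales with the column norms of C rather than of A.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
n, sigma = 8, 1.0
A = np.tril(np.ones((n, n)))     # workload: all prefix sums of the stream
B = C = np.real(sqrtm(A))        # one valid factorization A = B @ C

x = rng.normal(size=n)                       # the (clipped) data stream
sens = np.linalg.norm(C, axis=0).max()       # L2 sensitivity of C @ x
noisy = B @ (C @ x + sigma * sens * rng.normal(size=n))
print(np.abs(noisy - A @ x).mean())          # error of private prefix sums
```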
no code implementations • 8 Sep 2021 • Zachary Charles, Keith Rush
In the context of federated learning, we show that when clients have loss functions whose gradients satisfy this condition, federated averaging is equivalent to gradient descent on a surrogate loss function.
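A worked toy check of this equivalence in its simplest instance (scalar quadratic clients, which satisfy the paper's condition): with losses f_i(x) = a(x - c_i)^2 / 2, running K local gradient steps per client and averaging moves x exactly like one gradient step on the surrogate F(x) = (x - mean(c))^2 / 2 with effective step size 1 - (1 - lr * a)^K.

```python
import numpy as np

a, lr, K = 2.0, 0.1, 5
c = np.array([1.0, -3.0, 4.0])    # per-client optima
x = 10.0

# Federated averaging: K local gradient steps per client, then average.
locals_ = np.full_like(c, x)
for _ in range(K):
    locals_ = locals_ - lr * a * (locals_ - c)
fedavg_next = locals_.mean()

# One gradient step on the surrogate loss with the effective step size.
eff = 1.0 - (1.0 - lr * a) ** K
surrogate_next = x - eff * (x - c.mean())

print(fedavg_next, surrogate_next)   # identical up to float rounding
```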
3 code implementations • NeurIPS 2021 • Karan Singhal, Hakim Sidahmed, Zachary Garrett, Shanshan Wu, Keith Rush, Sushant Prakash
We also describe the successful deployment of this approach at scale for federated collaborative filtering in a mobile keyboard application.
no code implementations • 14 Aug 2020 • Peter Kairouz, Mónica Ribero, Keith Rush, Abhradeep Thakurta
In particular, we show that if the gradients lie in a known constant rank subspace, and assuming algorithmic access to an envelope which bounds decaying sensitivity, one can achieve faster convergence to an excess empirical risk of $\tilde O(1/\epsilon n)$, where $\epsilon$ is the privacy budget and $n$ the number of samples.
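An illustrative sketch of the subspace idea (details assumed): when gradients lie in a known rank-r subspace of R^d, projecting before the Gaussian mechanism means the injected noise lives in r dimensions rather than d, shrinking its norm from roughly sigma * sqrt(d) to sigma * sqrt(r) at the same privacy level, since projection onto an orthonormal basis does not increase sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, sigma = 1000, 10, 1.0
U = np.linalg.qr(rng.normal(size=(d, r)))[0]   # known orthonormal basis

grad = U @ rng.normal(size=r)                  # gradient in the subspace
naive = grad + sigma * rng.normal(size=d)      # noise added in all d dims
private = U @ (U.T @ grad + sigma * rng.normal(size=r))  # noise in r dims

print(np.linalg.norm(naive - grad))    # ~ sigma * sqrt(d)
print(np.linalg.norm(private - grad))  # ~ sigma * sqrt(r)
```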
5 code implementations • ICLR 2021 • Sashank Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, H. Brendan McMahan
Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data.
2 code implementations • 27 Sep 2019 • Yihan Jiang, Jakub Konečný, Keith Rush, Sreeram Kannan
We present FL as a natural source of practical applications for MAML algorithms.