no code implementations • ICML 2020 • Jayadev Acharya, Kallista Bonawitz, Peter Kairouz, Daniel Ramage, Ziteng Sun
The original definition of LDP assumes that all the elements in the data domain are equally sensitive.
no code implementations • 11 Oct 2024 • Katharine Daly, Hubert Eichner, Peter Kairouz, H. Brendan McMahan, Daniel Ramage, Zheng Xu
Federated Learning (FL) is a machine learning technique that enables multiple entities to collaboratively learn a shared model without exchanging their local data.
no code implementations • 8 May 2024 • Eugene Bagdasarian, Ren Yi, Sahra Ghalebikesabi, Peter Kairouz, Marco Gruteser, Sewoong Oh, Borja Balle, Daniel Ramage
The growing use of large language model (LLM)-based conversational agents to manage sensitive user data raises significant privacy concerns.
3 code implementations • 16 Apr 2024 • Hubert Eichner, Daniel Ramage, Kallista Bonawitz, Dzmitry Huba, Tiziano Santoro, Brett McLarnon, Timon Van Overveldt, Nova Fallen, Peter Kairouz, Albert Cheu, Katharine Daly, Adria Gascon, Marco Gruteser, Brendan McMahan
Federated Learning and Analytics (FLA) have seen widespread adoption by technology platforms for processing sensitive on-device data.
no code implementations • 5 Apr 2024 • Shanshan Wu, Zheng Xu, Yanxiang Zhang, Yuanbo Zhang, Daniel Ramage
Pre-training on public data is an effective method to improve the performance for federated learning (FL) with differential privacy (DP).
1 code implementation • 23 Aug 2021 • Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, Daniel Ramage
While recent works have indicated that federated learning (FL) may be vulnerable to poisoning attacks by compromised clients, their real impact on production FL systems is not fully understood.
9 code implementations • 10 Dec 2019 • Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G. L. D'Oliveira, Hubert Eichner, Salim El Rouayheb, David Evans, Josh Gardner, Zachary Garrett, Adrià Gascón, Badih Ghazi, Phillip B. Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konečný, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrède Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Özgür, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U. Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramèr, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
3 code implementations • ICLR 2020 • Sean Augenstein, H. Brendan McMahan, Daniel Ramage, Swaroop Ramaswamy, Peter Kairouz, Mingqing Chen, Rajiv Mathews, Blaise Aguera y Arcas
To improve real-world applications of machine learning, experienced modelers develop intuition about their datasets, their models, and how the two interact.
no code implementations • 31 Oct 2019 • Jayadev Acharya, Keith Bonawitz, Peter Kairouz, Daniel Ramage, Ziteng Sun
Local differential privacy (LDP) is a strong notion of privacy for individual users that often comes at the expense of a significant drop in utility.
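To make the privacy/utility tension concrete, here is an illustrative sketch (not from the paper) of the classic binary randomized-response mechanism, which satisfies ε-LDP for a single bit: each user reports truthfully only with probability e^ε/(e^ε+1), and the analyst debiases the aggregate.

```python
import math
import random

def randomized_response(bit: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it. Satisfies epsilon-LDP for one bit."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else not bit

def debiased_mean(reports, epsilon: float) -> float:
    """Unbiased estimate of the true fraction of 1-bits,
    correcting for the flipping probability."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```

The utility drop the abstract mentions shows up as estimator variance: as ε shrinks, p approaches 1/2 and the debiasing factor 1/(2p−1) blows up.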
1 code implementation • 22 Oct 2019 • Kangkang Wang, Rajiv Mathews, Chloé Kiddon, Hubert Eichner, Françoise Beaufays, Daniel Ramage
Federated learning is a distributed, on-device computation framework that enables training global models without exporting sensitive user data to servers.
7 code implementations • 4 Feb 2019 • Keith Bonawitz, Hubert Eichner, Wolfgang Grieskamp, Dzmitry Huba, Alex Ingerman, Vladimir Ivanov, Chloe Kiddon, Jakub Konečný, Stefano Mazzocchi, H. Brendan McMahan, Timon Van Overveldt, David Petrou, Daniel Ramage, Jason Roselander
Federated Learning is a distributed machine learning approach which enables model training on a large corpus of decentralized data.
1 code implementation • 7 Dec 2018 • Timothy Yang, Galen Andrew, Hubert Eichner, Haicheng Sun, Wei Li, Nicholas Kong, Daniel Ramage, Françoise Beaufays
Federated learning is a distributed form of machine learning where both the training data and model training are decentralized.
6 code implementations • 8 Nov 2018 • Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Françoise Beaufays, Sean Augenstein, Hubert Eichner, Chloé Kiddon, Daniel Ramage
We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones.
1 code implementation • ICLR 2018 • H. Brendan McMahan, Daniel Ramage, Kunal Talwar, Li Zhang
We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy.
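The core mechanism behind user-level guarantees of this kind is to bound each user's contribution and add calibrated noise to the aggregate. The sketch below is an illustrative simplification of a DP-FedAvg-style round (the function name and parameters are mine, not the paper's): clip every user's model delta to a fixed norm, average, and add Gaussian noise scaled to the clip norm.

```python
import numpy as np

def dp_federated_round(user_updates, clip_norm=1.0,
                       noise_multiplier=1.0, rng=None):
    """One illustrative DP aggregation round: clip each user's
    model delta to bound per-user sensitivity, average the
    clipped deltas, then add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for delta in user_updates:
        norm = np.linalg.norm(delta)
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(delta * scale)
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(user_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

Because sensitivity is bounded per user (not per example), the resulting guarantee covers a user's entire dataset, which is the "user-level" property the abstract refers to.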
no code implementations • 14 Nov 2016 • Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, Karn Seth
Secure Aggregation protocols allow a collection of mutually distrusting parties, each holding a private value, to collaboratively compute the sum of those values without revealing the values themselves.
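A minimal sketch of the core idea, assuming honest parties and ignoring the dropout-recovery and key-agreement machinery the full protocol provides: each pair of parties agrees on a random mask that one adds and the other subtracts, so individual uploads look random but the masks cancel in the server's sum.

```python
import random

def pairwise_masks(num_parties, modulus, seed=0):
    """Generate cancelling pairwise masks: for each pair (i, j),
    party i adds a shared random value and party j subtracts it,
    so the masks sum to zero modulo `modulus`."""
    rng = random.Random(seed)
    masks = [0] * num_parties
    for i in range(num_parties):
        for j in range(i + 1, num_parties):
            s = rng.randrange(modulus)
            masks[i] = (masks[i] + s) % modulus
            masks[j] = (masks[j] - s) % modulus
    return masks

def secure_sum(values, modulus=2**32, seed=0):
    """Each party uploads value + mask; the server sums the
    masked uploads and the masks cancel, revealing only the total."""
    masks = pairwise_masks(len(values), modulus, seed)
    masked = [(v + m) % modulus for v, m in zip(values, masks)]
    return sum(masked) % modulus
```

In the real protocol the pairwise values come from Diffie-Hellman key agreement and secret sharing so the sum survives client dropouts; this toy version only shows why the server learns the sum and nothing else.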
no code implementations • 8 Oct 2016 • Jakub Konečný, H. Brendan McMahan, Daniel Ramage, Peter Richtárik
We consider distributed optimization where the data are unevenly distributed over an extremely large number of nodes; we refer to this setting as Federated Optimization.
no code implementations • 24 Feb 2016 • Peter Kairouz, Keith Bonawitz, Daniel Ramage
The collection and analysis of user data drives improvements in the app and web ecosystems, but comes with risks to privacy.
35 code implementations • 17 Feb 2016 • H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Agüera y Arcas
Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device.
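The algorithm this paper introduces, Federated Averaging (FedAvg), can be sketched in a few lines. The version below is an illustrative toy (linear regression with squared loss; function names are mine): each client runs several epochs of local gradient descent, and the server averages the resulting models weighted by client dataset size.

```python
import numpy as np

def local_sgd(weights, data, targets, lr=0.1, epochs=5):
    """A few epochs of full-batch gradient descent on one
    client's data (linear model, squared loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * data.T @ (data @ w - targets) / len(targets)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, lr=0.1, epochs=5):
    """One FedAvg round: every client trains locally from the
    current global model, then the server takes a weighted
    average of the client models (weights = dataset sizes)."""
    total = sum(len(y) for _, y in clients)
    new_w = np.zeros_like(global_w)
    for X, y in clients:
        w_k = local_sgd(global_w, X, y, lr, epochs)
        new_w += (len(y) / total) * w_k
    return new_w
```

Running multiple local epochs before communicating is what distinguishes FedAvg from naive distributed SGD and is the main source of its communication savings.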
no code implementations • 11 Nov 2015 • Jakub Konečný, Brendan McMahan, Daniel Ramage
We consider distributed optimization where the data are unevenly distributed over an extremely large number of nodes; we refer to this setting as Federated Optimization.