Search Results for author: Daniel Ramage

Found 15 papers, 9 papers with code

Communication-Efficient Learning of Deep Networks from Decentralized Data

31 code implementations • 17 Feb 2016 • H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise Agüera y Arcas

Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device.

Federated Learning • Speech Recognition
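The paper above introduces FederatedAveraging: clients train on local data and a server averages their model weights, weighted by local dataset size. A toy sketch of that server-side averaging step (the function name and parameters are illustrative, not from the paper's code):

```python
import numpy as np

def federated_averaging(client_updates, client_sizes):
    """Server step of a FedAvg-style round (illustrative sketch).

    Each client trains locally and returns updated weights; the server
    averages them, weighted by the number of local training examples.
    """
    total = sum(client_sizes)
    new_weights = np.zeros_like(client_updates[0])
    for weights, n_examples in zip(client_updates, client_sizes):
        new_weights += (n_examples / total) * weights
    return new_weights

# Toy round: three clients with different data volumes.
updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
sizes = [10, 10, 20]
print(federated_averaging(updates, sizes))  # -> [0.75 0.75]
```

The weighting by `client_sizes` is what lets clients with very different amounts of data contribute proportionally to the global model.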

Discrete Distribution Estimation under Local Privacy

no code implementations • 24 Feb 2016 • Peter Kairouz, Keith Bonawitz, Daniel Ramage

The collection and analysis of user data drives improvements in the app and web ecosystems, but comes with risks to privacy.

Practical Secure Aggregation for Federated Learning on User-Held Data

no code implementations • 14 Nov 2016 • Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H. Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, Karn Seth

Secure Aggregation protocols allow a collection of mutually distrustful parties, each holding a private value, to collaboratively compute the sum of those values without revealing the values themselves.

Federated Learning
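The central trick the abstract describes can be illustrated with pairwise masking: for each pair of parties, one adds a shared random mask and the other subtracts it, so every individual report looks random but the masks cancel in the sum. A minimal sketch of that idea (illustrative only; the paper's protocol additionally uses secret sharing to tolerate dropped-out parties):

```python
import random

def masked_reports(values, seed=0):
    """Pairwise-masking sketch behind secure aggregation.

    For each pair (i, j), a shared random mask m is added by party i and
    subtracted by party j, so the masks cancel when reports are summed,
    while each individual masked report reveals nothing on its own.
    """
    n = len(values)
    rng = random.Random(seed)
    masks = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randint(-1000, 1000)
            masks[i][j] = m    # party i adds the shared mask
            masks[j][i] = -m   # party j subtracts the same mask
    return [values[i] + sum(masks[i]) for i in range(n)]

vals = [3, 5, 7]
reports = masked_reports(vals)
print(sum(reports) == sum(vals))  # masks cancel in the aggregate -> True
```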

Learning Differentially Private Recurrent Language Models

1 code implementation • ICLR 2018 • H. Brendan McMahan, Daniel Ramage, Kunal Talwar, Li Zhang

We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy.
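User-level differential privacy in this setting is obtained by clipping each user's model update to a fixed L2 norm and adding Gaussian noise to the average. A toy sketch of that clip-and-noise step (function name and parameters are hypothetical; the paper calibrates the noise to a target (epsilon, delta) via a moments accountant):

```python
import numpy as np

def dp_average(user_updates, clip_norm=1.0, noise_std=0.1, seed=0):
    """Clip-and-noise aggregation sketch for user-level DP training.

    Each per-user update is clipped to L2 norm <= clip_norm, then
    Gaussian noise scaled to the clip bound is added to the average.
    """
    rng = np.random.default_rng(seed)
    clipped = []
    for update in user_updates:
        norm = np.linalg.norm(update)
        clipped.append(update * min(1.0, clip_norm / norm))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_std * clip_norm / len(user_updates),
                       size=avg.shape)
    return avg + noise

updates = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
print(dp_average(updates).shape)  # (2,)
```

Clipping bounds any single user's influence on the average, which is what makes the added noise sufficient for a user-level (rather than per-example) guarantee.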

Federated Learning for Mobile Keyboard Prediction

5 code implementations • 8 Nov 2018 • Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Françoise Beaufays, Sean Augenstein, Hubert Eichner, Chloé Kiddon, Daniel Ramage

We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones.

Federated Learning • Language Modelling

Federated Evaluation of On-device Personalization

1 code implementation • 22 Oct 2019 • Kangkang Wang, Rajiv Mathews, Chloé Kiddon, Hubert Eichner, Françoise Beaufays, Daniel Ramage

Federated learning is a distributed, on-device computation framework that enables training global models without exporting sensitive user data to servers.

Language Modelling

Context-Aware Local Differential Privacy

no code implementations • 31 Oct 2019 • Jayadev Acharya, Keith Bonawitz, Peter Kairouz, Daniel Ramage, Ziteng Sun

Local differential privacy (LDP) is a strong notion of privacy for individual users that often comes at the expense of a significant drop in utility.
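The simplest LDP mechanism, and a useful reference point for the utility cost the abstract mentions, is randomized response: each user reports their true bit only with a probability determined by epsilon. A small sketch (illustrative; the paper studies more general and context-aware randomizers):

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    """Classic randomized response: an epsilon-LDP randomizer for one bit.

    The true bit is reported with probability e^eps / (e^eps + 1);
    otherwise the bit is flipped, giving plausible deniability.
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if rng.random() < p_truth else 1 - bit

rng = random.Random(0)
reports = [randomized_response(1, epsilon=1.0, rng=rng) for _ in range(10000)]
# With epsilon=1, users report truthfully with probability e/(e+1) ~ 0.731.
print(sum(reports) / len(reports))
```

Smaller epsilon pushes `p_truth` toward 1/2 (reports become pure noise), which is exactly the privacy/utility trade-off the LDP literature quantifies.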

Generative Models for Effective ML on Private, Decentralized Datasets

3 code implementations • ICLR 2020 • Sean Augenstein, H. Brendan McMahan, Daniel Ramage, Swaroop Ramaswamy, Peter Kairouz, Mingqing Chen, Rajiv Mathews, Blaise Aguera y Arcas

To improve real-world applications of machine learning, experienced modelers develop intuition about their datasets, their models, and how the two interact.

Federated Learning

Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning

1 code implementation • 23 Aug 2021 • Virat Shejwalkar, Amir Houmansadr, Peter Kairouz, Daniel Ramage

While recent works have indicated that federated learning (FL) may be vulnerable to poisoning attacks by compromised clients, their real impact on production FL systems is not fully understood.

Federated Learning • Misconceptions +1

Context Aware Local Differential Privacy

no code implementations • ICML 2020 • Jayadev Acharya, Kallista Bonawitz, Peter Kairouz, Daniel Ramage, Ziteng Sun

The original definition of LDP assumes that all the elements in the data domain are equally sensitive.
