Search Results for author: Swaroop Ramaswamy

Found 11 papers, 5 papers with code

Public Data-Assisted Mirror Descent for Private Model Training

no code implementations 1 Dec 2021 Ehsan Amid, Arun Ganesh, Rajiv Mathews, Swaroop Ramaswamy, Shuang Song, Thomas Steinke, Vinith M. Suriyakumar, Om Thakkar, Abhradeep Thakurta

In this paper, we revisit the problem of using in-distribution public data to improve the privacy/utility trade-offs for differentially private (DP) model training.

Federated Learning
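A rough sketch of the idea behind the entry above, under simplifying assumptions: here a clipped, noised (differentially private) gradient step on private data is interleaved with a clean gradient step on in-distribution public data. This is a stand-in for the paper's mirror-descent formulation, not its exact algorithm; the names private_grad, public_grad, clip_norm, and noise_multiplier are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def clip(g, clip_norm):
    """Scale the gradient so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(g)
    return g * min(1.0, clip_norm / (norm + 1e-12))

def dp_public_assisted_step(w, private_grad, public_grad,
                            lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    # Differentially private step on the private-data gradient.
    noisy = clip(private_grad, clip_norm) + rng.normal(
        scale=noise_multiplier * clip_norm, size=private_grad.shape)
    w = w - lr * noisy
    # Clean (non-private) step on the in-distribution public-data gradient.
    return w - lr * public_grad

# Toy usage with illustrative gradients.
w = np.zeros(5)
w = dp_public_assisted_step(w, private_grad=np.ones(5), public_grad=0.5 * np.ones(5))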

Revealing and Protecting Labels in Distributed Training

1 code implementation NeurIPS 2021 Trung Dang, Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Peter Chin, Françoise Beaufays

Prior works have demonstrated that labels can be revealed analytically from the last layer of certain models (e.g., ResNet), or they can be reconstructed jointly with model inputs by using Gradients Matching [Zhu et al. '19] with additional knowledge about the current state of the model.

Automatic Speech Recognition (ASR) +4
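A minimal sketch of the analytic label-leakage observation referenced in the abstract above: for a single example trained with softmax cross-entropy, the gradient of the loss with respect to the last-layer bias equals softmax(logits) minus the one-hot label, so its only negative entry reveals the label. The tiny linear layer below is an illustrative stand-in for a model's final layer, not the paper's setup.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, dim = 10, 32
layer = torch.nn.Linear(dim, num_classes)   # stand-in for a model's last layer

x = torch.randn(1, dim)
true_label = torch.tensor([3])

loss = F.cross_entropy(layer(x), true_label)
loss.backward()

# An observer who only sees the shared gradient can still recover the label:
# the bias gradient's single negative entry sits at the true class index.
recovered = int(torch.argmin(layer.bias.grad))
assert recovered == int(true_label)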

Training Production Language Models without Memorizing User Data

no code implementations 21 Sep 2020 Swaroop Ramaswamy, Om Thakkar, Rajiv Mathews, Galen Andrew, H. Brendan McMahan, Françoise Beaufays

This paper presents the first consumer-scale next-word prediction (NWP) model trained with Federated Learning (FL) while leveraging the Differentially Private Federated Averaging (DP-FedAvg) technique.

Federated Learning Memorization
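A minimal sketch of the DP-FedAvg idea named in the entry above, under illustrative assumptions: each user's model update is clipped to a fixed L2 bound S, the clipped updates are averaged, and Gaussian noise calibrated to S is added before applying the update. The constants and the toy model size are not the paper's production settings.

import numpy as np

rng = np.random.default_rng(0)

def dp_fedavg_round(global_weights, user_updates, S=1.0, noise_multiplier=1.0):
    # Clip each user's update to L2 norm at most S.
    clipped = []
    for delta in user_updates:
        norm = np.linalg.norm(delta)
        clipped.append(delta * min(1.0, S / (norm + 1e-12)))
    mean_update = np.mean(clipped, axis=0)
    # Gaussian noise scaled to the clip bound and the number of users.
    noise = rng.normal(scale=noise_multiplier * S / len(user_updates),
                       size=global_weights.shape)
    return global_weights + mean_update + noise

# Toy usage: three simulated user updates for a 4-parameter model.
w = np.zeros(4)
updates = [rng.normal(size=4) for _ in range(3)]
w = dp_fedavg_round(w, updates)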

Understanding Unintended Memorization in Federated Learning

no code implementations 12 Jun 2020 Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Françoise Beaufays

In this paper, we initiate a formal study of how different components of canonical FL affect unintended memorization in trained models, comparing against the centralized learning setting.

Clustering Federated Learning +1

Generative Models for Effective ML on Private, Decentralized Datasets

3 code implementations ICLR 2020 Sean Augenstein, H. Brendan McMahan, Daniel Ramage, Swaroop Ramaswamy, Peter Kairouz, Mingqing Chen, Rajiv Mathews, Blaise Aguera y Arcas

To improve real-world applications of machine learning, experienced modelers develop intuition about their datasets, their models, and how the two interact.

Federated Learning

Differentially Private Learning with Adaptive Clipping

1 code implementation NeurIPS 2021 Galen Andrew, Om Thakkar, H. Brendan McMahan, Swaroop Ramaswamy

Existing approaches for training neural networks with user-level differential privacy (e.g., DP Federated Averaging) in federated learning (FL) settings involve bounding the contribution of each user's model update by clipping it to some constant value.

Federated Learning
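A simplified sketch of the quantile-based adaptive clipping idea behind the entry above: each round, count the fraction of user updates whose norm fell at or below the current clip bound C (privatized here with a bit of noise on the count), then nudge C geometrically toward a target quantile instead of keeping it fixed. The learning rate, noise scale, and toy norms are illustrative, not the paper's defaults.

import numpy as np

rng = np.random.default_rng(0)

def update_clip_norm(C, update_norms, target_quantile=0.5,
                     lr=0.2, count_noise_std=0.05):
    below = np.mean(update_norms <= C)                 # empirical quantile at C
    noisy_below = below + rng.normal(scale=count_noise_std)
    # Geometric update: grow C if too few updates fit under it, shrink if too many do.
    return C * np.exp(-lr * (noisy_below - target_quantile))

# Toy usage over three rounds of simulated update norms.
C = 1.0
for norms in ([0.4, 2.0, 3.1], [0.5, 1.8, 2.9], [0.6, 1.7, 2.5]):
    C = update_clip_norm(C, np.array(norms))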

Federated Learning for Mobile Keyboard Prediction

5 code implementations 8 Nov 2018 Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Françoise Beaufays, Sean Augenstein, Hubert Eichner, Chloé Kiddon, Daniel Ramage

We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones.

Federated Learning Language Modelling
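A minimal sketch of one federated averaging round, as a stand-in for the on-device training loop described in the entry above: each client runs a few local SGD steps on its own data, and the server averages the resulting weights. The least-squares "model" and synthetic client data are illustrative, not the paper's recurrent language model.

import numpy as np

def local_sgd(w, X, y, lr=0.05, steps=5):
    # A few local gradient steps on one client's data (least-squares loss).
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(global_w, clients):
    # Each client trains locally; the server averages the returned weights.
    local_weights = [local_sgd(global_w.copy(), X, y) for X, y in clients]
    return np.mean(local_weights, axis=0)

# Toy usage: four clients, three-parameter model, three rounds.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(3):
    w = fedavg_round(w, clients)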
