Search Results for author: Rajiv Mathews

Found 30 papers, 5 papers with code

Learning from straggler clients in federated learning

no code implementations · 14 Mar 2024 · Andrew Hard, Antonious M. Girgis, Ehsan Amid, Sean Augenstein, Lara McConnaughey, Rajiv Mathews, Rohan Anil

How well do existing federated learning algorithms learn from client devices that return model updates with a significant time delay?

Federated Learning

Unintended Memorization in Large ASR Models, and How to Mitigate It

no code implementations · 18 Oct 2023 · Lun Wang, Om Thakkar, Rajiv Mathews

We empirically show that clipping each example's gradient can mitigate memorization for sped-up training examples with up to 16 repetitions in the training set.
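
The mitigation named here, per-example gradient clipping, bounds how much any single (possibly repeated) utterance can move the model. Below is a minimal NumPy sketch of the idea; `clip_norm` and the toy gradients are illustrative placeholders, not values from the paper.

```python
import numpy as np

def clip_per_example(grads, clip_norm=1.0):
    """Scale each example's gradient so its L2 norm is at most clip_norm,
    limiting the influence of any one (memorization-prone) example."""
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    return grads * scale

# Toy usage: a large per-example gradient no longer dominates the update.
grads = np.array([[3.0, 4.0],   # norm 5.0 -> rescaled to norm 1.0
                  [0.3, 0.4]])  # norm 0.5 -> left unchanged
update = clip_per_example(grads).mean(axis=0)
```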

Automatic Speech Recognition (ASR) +2

Heterogeneous Federated Learning Using Knowledge Codistillation

no code implementations · 4 Oct 2023 · Jared Lichtarge, Ehsan Amid, Shankar Kumar, Tien-Ju Yang, Rohan Anil, Rajiv Mathews

Federated Averaging, and the many federated learning variants that build upon it, have a limitation: all clients must share the same model architecture.
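
Codistillation sidesteps that limitation by exchanging predictions on a shared unlabeled dataset rather than weights, so architectures need not match. A hedged sketch of the distillation term, with all names illustrative rather than taken from the paper:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def codistillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on a shared unlabeled batch; each model can
    play either role, so two different architectures can teach each other."""
    t = softmax(teacher_logits / temperature)
    s = softmax(student_logits / temperature)
    return float(np.mean(np.sum(t * (np.log(t + 1e-12) - np.log(s + 1e-12)), axis=-1)))

# A small on-device model and a large server-side model exchange logits:
loss = codistillation_loss(np.random.randn(8, 10), np.random.randn(8, 10))
```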

Federated Learning · Image Classification +2

Recycling Scraps: Improving Private Learning by Leveraging Intermediate Checkpoints

no code implementations · 4 Oct 2022 · Virat Shejwalkar, Arun Ganesh, Rajiv Mathews, Om Thakkar, Abhradeep Thakurta

Empirically, we show that the last few checkpoints can provide a reasonable lower bound for the variance of a converged DP model.
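
A minimal sketch of the checkpoint-recycling idea, assuming we simply average the tail checkpoints for prediction and read their spread as an uncertainty proxy (function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def aggregate_tail_checkpoints(checkpoints):
    """Average the last few DP training checkpoints into the deployed
    parameters, and use their per-parameter spread as a rough (lower-bound)
    estimate of the converged model's variance."""
    stacked = np.stack(checkpoints)          # shape: (k, num_params)
    return stacked.mean(axis=0), stacked.var(axis=0)

# Toy usage with three noisy tail checkpoints of a 4-parameter model:
ckpts = [np.array([1.0, 2.0, 3.0, 4.0]) + 0.1 * np.random.randn(4)
         for _ in range(3)]
params, variance_estimate = aggregate_tail_checkpoints(ckpts)
```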

Large vocabulary speech recognition for languages of Africa: multilingual modeling and self-supervised learning

no code implementations · 5 Aug 2022 · Sandy Ritchie, You-Chi Cheng, Mingqing Chen, Rajiv Mathews, Daan van Esch, Bo Li, Khe Chai Sim

Almost none of the 2,000+ languages spoken in Africa have widely available automatic speech recognition systems, and the required data is likewise available for only a few languages.

Automatic Speech Recognition (ASR) +2

Mixed Federated Learning: Joint Decentralized and Centralized Learning

no code implementations · 26 May 2022 · Sean Augenstein, Andrew Hard, Lin Ning, Karan Singhal, Satyen Kale, Kurt Partridge, Rajiv Mathews

For example, additional datacenter data can be leveraged to jointly learn from centralized (datacenter) and decentralized (federated) training data and better match an expected inference data distribution.
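
One way to read "mixed" training is as optimizing a convex combination of the two objectives. A minimal sketch under that assumption (the mixing weight and names are placeholders, not the paper's algorithm):

```python
import numpy as np

def mixed_step(params, federated_grad, datacenter_grad, mix=0.5, lr=0.1):
    """One descent step on
    L(w) = mix * L_datacenter(w) + (1 - mix) * L_federated(w),
    where federated_grad comes from aggregated client updates and
    datacenter_grad from a centralized batch."""
    return params - lr * (mix * datacenter_grad + (1.0 - mix) * federated_grad)

params = np.zeros(4)
params = mixed_step(params, federated_grad=np.ones(4),
                    datacenter_grad=2 * np.ones(4))
```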

Federated Learning

Online Model Compression for Federated Learning with Large Models

no code implementations · 6 May 2022 · Tien-Ju Yang, Yonghui Xiao, Giovanni Motta, Françoise Beaufays, Rajiv Mathews, Mingqing Chen

This paper addresses the challenges of training large neural network models under federated learning settings: high on-device memory usage and communication cost.
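
The generic building block behind such schemes is keeping weights in a compressed (e.g., quantized) form on the device and decompressing on the fly. A rough uniform-quantization sketch, not the paper's exact method:

```python
import numpy as np

def quantize(weights, num_bits=8):
    """Store float weights as num_bits integers plus (offset, scale),
    shrinking both on-device memory and upload size."""
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / ((1 << num_bits) - 1)
    q = np.round((weights - lo) / (scale + 1e-12)).astype(np.uint8)
    return q, lo, scale

def dequantize(q, lo, scale):
    return q.astype(np.float32) * scale + lo

w = np.random.randn(1000).astype(np.float32)
w_hat = dequantize(*quantize(w))   # reconstruction error bounded by ~scale/2
```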

Federated Learning · Model Compression +3

Detecting Unintended Memorization in Language-Model-Fused ASR

no code implementations · 20 Apr 2022 · W. Ronny Huang, Steve Chien, Om Thakkar, Rajiv Mathews

End-to-end (E2E) models are often accompanied by language models (LMs) via shallow fusion to boost their overall quality as well as their recognition of rare words.
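
Shallow fusion itself is just a log-linear combination of the two models' scores at decoding time. A minimal sketch; the LM weight here is a typical tuning knob, not a value from this paper:

```python
import numpy as np

def shallow_fusion(asr_log_probs, lm_log_probs, lm_weight=0.3):
    """score(y) = log p_ASR(y | x) + lm_weight * log p_LM(y),
    applied to next-token scores during beam search."""
    return asr_log_probs + lm_weight * lm_log_probs

# Toy next-token scores over a 5-token vocabulary: the external LM
# flips the decision toward token 1 (e.g., a rare word it knows well).
asr = np.log([0.35, 0.33, 0.12, 0.12, 0.08])
lm = np.log([0.05, 0.60, 0.20, 0.10, 0.05])
print(int(np.argmax(shallow_fusion(asr, lm))))  # -> 1
```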

Language Modelling · Memorization

Extracting Targeted Training Data from ASR Models, and How to Mitigate It

no code implementations · 18 Apr 2022 · Ehsan Amid, Om Thakkar, Arun Narayanan, Rajiv Mathews, Françoise Beaufays

We design Noise Masking, a fill-in-the-blank style method for extracting targeted parts of training data from trained ASR models.
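
At a high level, the attack replaces the part of a training utterance it wants the model to "fill in" with noise and checks whether the transcript still contains the original words. An illustrative sketch of just the masking step, inferred from the abstract (the ASR call itself is model-specific and omitted):

```python
import numpy as np

def noise_mask(waveform, start, end, noise_std=0.01, seed=0):
    """Replace samples [start, end) of an utterance with Gaussian noise.
    If a trained ASR model still transcribes the masked words, that is
    evidence it memorized them from its training data."""
    rng = np.random.default_rng(seed)
    masked = waveform.copy()
    masked[start:end] = noise_std * rng.standard_normal(end - start)
    return masked
```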

Data Augmentation

Public Data-Assisted Mirror Descent for Private Model Training

no code implementations · 1 Dec 2021 · Ehsan Amid, Arun Ganesh, Rajiv Mathews, Swaroop Ramaswamy, Shuang Song, Thomas Steinke, Vinith M. Suriyakumar, Om Thakkar, Abhradeep Thakurta

In this paper, we revisit the problem of using in-distribution public data to improve the privacy/utility trade-offs for differentially private (DP) model training.

Federated Learning

Jointly Learning from Decentralized (Federated) and Centralized Data to Mitigate Distribution Shift

no code implementations · 23 Nov 2021 · Sean Augenstein, Andrew Hard, Kurt Partridge, Rajiv Mathews

With privacy as a motivation, Federated Learning (FL) is an increasingly used paradigm where learning takes place collectively on edge devices, each with a cache of user-generated training examples that remain resident on the local device.

Federated Learning

Revealing and Protecting Labels in Distributed Training

1 code implementation · NeurIPS 2021 · Trung Dang, Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Peter Chin, Françoise Beaufays

Prior works have demonstrated that labels can be revealed analytically from the last layer of certain models (e.g., ResNet), or they can be reconstructed jointly with model inputs by using Gradients Matching [Zhu et al. '19] with additional knowledge about the current state of the model.
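
The analytic case is easy to see for softmax cross-entropy: on a single example, the gradient of the final-layer bias equals the softmax probabilities minus the one-hot label, so its only negative entry marks the true label. A small self-contained demonstration:

```python
import numpy as np

def reveal_label(last_layer_bias_grad):
    """For softmax cross-entropy on one example, grad(bias) = probs - one_hot,
    so the most negative entry is the true label."""
    return int(np.argmin(last_layer_bias_grad))

probs = np.array([0.1, 0.2, 0.6, 0.1])    # model's predicted distribution
one_hot = np.array([0.0, 0.0, 1.0, 0.0])  # true label: class 2
observed_grad = probs - one_hot           # what a gradient-sharing server sees
print(reveal_label(observed_grad))        # -> 2
```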

Automatic Speech Recognition (ASR) +4

Position-Invariant Truecasing with a Word-and-Character Hierarchical Recurrent Neural Network

no code implementations · 26 Aug 2021 · Hao Zhang, You-Chi Cheng, Shankar Kumar, Mingqing Chen, Rajiv Mathews

Truecasing is the task of restoring the correct case (uppercase or lowercase) of noisy text generated either by an automatic system for speech recognition or machine translation, or by humans.

Language Modelling · Machine Translation +8

Communication-Efficient Agnostic Federated Averaging

no code implementations · 6 Apr 2021 · Jae Ro, Mingqing Chen, Rajiv Mathews, Mehryar Mohri, Ananda Theertha Suresh

We propose a communication-efficient distributed algorithm called Agnostic Federated Averaging (or AgnosticFedAvg) to minimize the domain-agnostic objective proposed in Mohri et al. (2019); the algorithm is also amenable to other private mechanisms such as secure aggregation.
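
The domain-agnostic objective is a minimax problem: minimize, over model parameters, the worst-case mixture of per-domain losses. A sketch of the mixture-weight side using exponentiated-gradient ascent (the step size and names are assumptions for illustration):

```python
import numpy as np

def update_domain_weights(domain_losses, weights, step_size=0.5):
    """One exponentiated-gradient step on lambda for
    min_w max_lambda sum_k lambda_k * L_k(w):
    harder domains get more weight in the next round's objective."""
    w = weights * np.exp(step_size * np.asarray(domain_losses))
    return w / w.sum()

lam = np.ones(3) / 3                        # start from the uniform mixture
lam = update_domain_weights([0.2, 1.5, 0.7], lam)
print(lam)                                  # mass shifts toward domain 1
```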

Federated Learning · Language Modelling

Training Production Language Models without Memorizing User Data

no code implementations · 21 Sep 2020 · Swaroop Ramaswamy, Om Thakkar, Rajiv Mathews, Galen Andrew, H. Brendan McMahan, Françoise Beaufays

This paper presents the first consumer-scale next-word prediction (NWP) model trained with Federated Learning (FL) while leveraging the Differentially Private Federated Averaging (DP-FedAvg) technique.
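
DP-FedAvg makes the aggregation step differentially private by clipping each client's model delta and adding calibrated Gaussian noise to the sum. A minimal sketch with placeholder hyperparameters:

```python
import numpy as np

def dp_fedavg_aggregate(client_deltas, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """Clip each client update to clip_norm, sum, add Gaussian noise scaled
    to the clipping bound, then average; no single user can dominate."""
    rng = np.random.default_rng(seed)
    total = np.zeros_like(client_deltas[0])
    for delta in client_deltas:
        norm = np.linalg.norm(delta)
        total += delta * min(1.0, clip_norm / max(norm, 1e-12))
    total += rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return total / len(client_deltas)

avg = dp_fedavg_aggregate([np.array([3.0, 4.0]), np.array([0.3, 0.4])])
```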

Federated Learning · Memorization

Understanding Unintended Memorization in Federated Learning

no code implementations · 12 Jun 2020 · Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Françoise Beaufays

In this paper, we initiate a formal study to understand the effect of different components of canonical FL on unintended memorization in trained models, comparing with the central learning setting.

Clustering · Federated Learning +1

Training Keyword Spotting Models on Non-IID Data with Federated Learning

no code implementations · 21 May 2020 · Andrew Hard, Kurt Partridge, Cameron Nguyen, Niranjan Subrahmanya, Aishanee Shah, Pai Zhu, Ignacio Lopez Moreno, Rajiv Mathews

We demonstrate that a production-quality keyword-spotting model can be trained on-device using federated learning, achieving false-accept and false-reject rates comparable to a centrally trained model.

Data Augmentation · Federated Learning +1

Generative Models for Effective ML on Private, Decentralized Datasets

3 code implementations · ICLR 2020 · Sean Augenstein, H. Brendan McMahan, Daniel Ramage, Swaroop Ramaswamy, Peter Kairouz, Mingqing Chen, Rajiv Mathews, Blaise Aguera y Arcas

To improve real-world applications of machine learning, experienced modelers develop intuition about their datasets, their models, and how the two interact.

Federated Learning

Federated Evaluation of On-device Personalization

1 code implementation · 22 Oct 2019 · Kangkang Wang, Rajiv Mathews, Chloé Kiddon, Hubert Eichner, Françoise Beaufays, Daniel Ramage

Federated learning is a distributed, on-device computation framework that enables training global models without exporting sensitive user data to servers.

Language Modelling

Federated Learning of N-gram Language Models

no code implementations · CoNLL 2019 · Mingqing Chen, Ananda Theertha Suresh, Rajiv Mathews, Adeline Wong, Cyril Allauzen, Françoise Beaufays, Michael Riley

The n-gram language models trained with federated learning are compared to n-grams trained with traditional server-based algorithms, using A/B tests on tens of millions of users of a virtual keyboard.

Federated Learning · Language Modelling

Federated Learning Of Out-Of-Vocabulary Words

no code implementations · 26 Mar 2019 · Mingqing Chen, Rajiv Mathews, Tom Ouyang, Françoise Beaufays

We demonstrate that a character-level recurrent neural network is able to learn out-of-vocabulary (OOV) words under federated learning settings, for the purpose of expanding the vocabulary of a virtual keyboard for smartphones without exporting sensitive text to servers.

Federated Learning

Federated Learning for Mobile Keyboard Prediction

5 code implementations · 8 Nov 2018 · Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Françoise Beaufays, Sean Augenstein, Hubert Eichner, Chloé Kiddon, Daniel Ramage

We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones.
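
The core of that framework is Federated Averaging: each device trains locally on its own cache, and the server combines the resulting models weighted by local example counts (McMahan et al., 2017). A minimal sketch with toy values:

```python
import numpy as np

def federated_averaging(client_params, client_num_examples):
    """New global model = example-count-weighted average of client models."""
    w = np.asarray(client_num_examples, dtype=np.float64)
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))

# Two phones fine-tune the global next-word model locally; the server
# then averages their parameters, never seeing the raw typing data.
new_global = federated_averaging(
    [np.array([0.9, 1.1]), np.array([1.3, 0.7])],
    client_num_examples=[200, 50])
```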

Federated Learning · Language Modelling
