Search Results for author: Françoise Beaufays

Found 32 papers, 5 papers with code

Federated Pruning: Improving Neural Network Efficiency with Federated Learning

no code implementations · 14 Sep 2022 · Rongmei Lin, Yonghui Xiao, Tien-Ju Yang, Ding Zhao, Li Xiong, Giovanni Motta, Françoise Beaufays

Automatic Speech Recognition models require large amounts of speech data for training, and collecting such data often raises privacy concerns.

Automatic Speech Recognition (ASR) +2

Online Model Compression for Federated Learning with Large Models

no code implementations · 6 May 2022 · Tien-Ju Yang, Yonghui Xiao, Giovanni Motta, Françoise Beaufays, Rajiv Mathews, Mingqing Chen

This paper addresses the challenges of training large neural network models under federated learning settings: high on-device memory usage and communication cost.

Federated Learning · Model Compression +3

Extracting Targeted Training Data from ASR Models, and How to Mitigate It

no code implementations · 18 Apr 2022 · Ehsan Amid, Om Thakkar, Arun Narayanan, Rajiv Mathews, Françoise Beaufays

We design Noise Masking, a fill-in-the-blank style method for extracting targeted parts of training data from trained ASR models.

Data Augmentation

Handling Compounding in Mobile Keyboard Input

no code implementations · 17 Jan 2022 · Andreas Kabel, Keith Hall, Tom Ouyang, David Rybach, Daan van Esch, Françoise Beaufays

This paper proposes a framework to improve the typing experience of mobile users in morphologically rich languages.

Revealing and Protecting Labels in Distributed Training

1 code implementation · NeurIPS 2021 · Trung Dang, Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Peter Chin, Françoise Beaufays

Prior works have demonstrated that labels can be revealed analytically from the last layer of certain models (e.g., ResNet), or reconstructed jointly with model inputs using Gradients Matching [Zhu et al. '19] with additional knowledge about the current state of the model.

Automatic Speech Recognition (ASR) +4
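The analytic label-revealing observation the abstract refers to can be illustrated with a minimal sketch (this is the well-known last-layer property, not the paper's full method): for a single example trained with cross-entropy loss, the gradient of the loss with respect to the final-layer bias equals softmax(logits) minus the one-hot label, so its unique negative entry indexes the true label.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def recover_label(bias_grad):
    """For one example under cross-entropy loss, d(loss)/d(bias) =
    softmax(logits) - onehot(label): every entry is >= 0 except the one
    at the true label, so the minimum entry reveals the label."""
    return int(np.argmin(bias_grad))

# Illustrative 5-class example with true label 3 (values are made up).
logits = np.array([0.2, -1.0, 0.5, 2.0, 0.1])
label = 3
bias_grad = softmax(logits) - np.eye(5)[label]  # what an observer of gradients sees
assert recover_label(bias_grad) == label
```

This only covers the single-example, last-layer case; recovering labels for batches or deeper layers is what makes the general problem harder.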

Partial Variable Training for Efficient On-Device Federated Learning

no code implementations · 11 Oct 2021 · Tien-Ju Yang, Dhruv Guliani, Françoise Beaufays, Giovanni Motta

This paper aims to address the major challenges of Federated Learning (FL) on edge devices: limited memory and expensive communication.

Federated Learning · Speech Recognition +1

Exploring Heterogeneous Characteristics of Layers in ASR Models for More Efficient Training

no code implementations · 8 Oct 2021 · Lillian Zhou, Dhruv Guliani, Andreas Kabel, Giovanni Motta, Françoise Beaufays

Transformer-based architectures have been the subject of research aimed at understanding their overparameterization and the non-uniform importance of their layers.

Automatic Speech Recognition (ASR) +2

Fast Contextual Adaptation with Neural Associative Memory for On-Device Personalized Speech Recognition

no code implementations · 5 Oct 2021 · Tsendsuren Munkhdalai, Khe Chai Sim, Angad Chandorkar, Fan Gao, Mason Chua, Trevor Strohman, Françoise Beaufays

Fast contextual adaptation has been shown to be effective in improving Automatic Speech Recognition (ASR) of rare words, and when combined with on-device personalized training it can yield even better recognition results.

Automatic Speech Recognition (ASR) +2

Large-scale ASR Domain Adaptation using Self- and Semi-supervised Learning

no code implementations · 1 Oct 2021 · Dongseong Hwang, Ananya Misra, Zhouyuan Huo, Nikhil Siddhartha, Shefali Garg, David Qiu, Khe Chai Sim, Trevor Strohman, Françoise Beaufays, Yanzhang He

Self- and semi-supervised learning methods have been actively investigated as ways to reduce the need for labeled training data or to enhance model performance.

Domain Adaptation

On-Device Personalization of Automatic Speech Recognition Models for Disordered Speech

no code implementations · 18 Jun 2021 · Katrin Tomanek, Françoise Beaufays, Julie Cattiau, Angad Chandorkar, Khe Chai Sim

While current state-of-the-art Automatic Speech Recognition (ASR) systems achieve high accuracy on typical speech, they suffer from significant performance degradation on disordered speech and other atypical speech patterns.

Automatic Speech Recognition (ASR) +1

Training Production Language Models without Memorizing User Data

no code implementations · 21 Sep 2020 · Swaroop Ramaswamy, Om Thakkar, Rajiv Mathews, Galen Andrew, H. Brendan McMahan, Françoise Beaufays

This paper presents the first consumer-scale next-word prediction (NWP) model trained with Federated Learning (FL) while leveraging the Differentially Private Federated Averaging (DP-FedAvg) technique.

Federated Learning · Memorization
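The DP-FedAvg aggregation step mentioned in the abstract can be sketched minimally (the function, clip norm, and noise multiplier below are illustrative assumptions, not the production implementation): each client's model delta is clipped to an L2 bound, the clipped deltas are averaged, and Gaussian noise calibrated to the clip norm is added.

```python
import numpy as np

def dp_fedavg_round(client_deltas, clip_norm, noise_multiplier, rng):
    """One server round of DP-FedAvg (sketch): clip each client's model
    delta to L2 norm <= clip_norm, average, then add Gaussian noise
    whose scale is tied to the clipping bound."""
    clipped = []
    for delta in client_deltas:
        norm = np.linalg.norm(delta)
        clipped.append(delta * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    noise_std = noise_multiplier * clip_norm / len(client_deltas)
    return mean + rng.normal(0.0, noise_std, size=mean.shape)

# Three simulated client deltas of very different magnitudes.
rng = np.random.default_rng(0)
deltas = [rng.normal(size=4) * s for s in (0.5, 5.0, 50.0)]
agg = dp_fedavg_round(deltas, clip_norm=1.0, noise_multiplier=0.1, rng=rng)
```

Clipping bounds any single client's influence on the aggregate, which is what makes the Gaussian noise sufficient for a differential-privacy guarantee.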

Understanding Unintended Memorization in Federated Learning

no code implementations · 12 Jun 2020 · Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Françoise Beaufays

In this paper, we initiate a formal study to understand the effect of different components of canonical FL on unintended memorization in trained models, comparing with the central learning setting.

Clustering · Federated Learning +1

Writing Across the World's Languages: Deep Internationalization for Gboard, the Google Keyboard

no code implementations · 3 Dec 2019 · Daan van Esch, Elnaz Sarbar, Tamar Lucassen, Jeremy O'Brien, Theresa Breiner, Manasa Prasad, Evan Crew, Chieu Nguyen, Françoise Beaufays

Today, Gboard supports 900+ language varieties across 70+ writing systems, and this report describes how and why we have been adding support for hundreds of language varieties from around the globe.

Federated Evaluation of On-device Personalization

1 code implementation · 22 Oct 2019 · Kangkang Wang, Rajiv Mathews, Chloé Kiddon, Hubert Eichner, Françoise Beaufays, Daniel Ramage

Federated learning is a distributed, on-device computation framework that enables training global models without exporting sensitive user data to servers.

Language Modelling

Federated Learning of N-gram Language Models

no code implementations · CoNLL 2019 · Mingqing Chen, Ananda Theertha Suresh, Rajiv Mathews, Adeline Wong, Cyril Allauzen, Françoise Beaufays, Michael Riley

The n-gram language models trained with federated learning are compared to n-grams trained with traditional server-based algorithms using A/B tests on tens of millions of users of a virtual keyboard.

Federated Learning · Language Modelling

An Investigation Into On-device Personalization of End-to-end Automatic Speech Recognition Models

no code implementations · 14 Sep 2019 · Khe Chai Sim, Petr Zadrazil, Françoise Beaufays

Speaker-independent speech recognition systems trained with data from many users are generally robust against speaker variability and work well for a large population of speakers.

Automatic Speech Recognition (ASR) +1

Federated Learning Of Out-Of-Vocabulary Words

no code implementations · 26 Mar 2019 · Mingqing Chen, Rajiv Mathews, Tom Ouyang, Françoise Beaufays

We demonstrate that a character-level recurrent neural network is able to learn out-of-vocabulary (OOV) words under federated learning settings, for the purpose of expanding the vocabulary of a virtual keyboard for smartphones without exporting sensitive text to servers.

Federated Learning

Federated Learning for Mobile Keyboard Prediction

5 code implementations · 8 Nov 2018 · Andrew Hard, Kanishka Rao, Rajiv Mathews, Swaroop Ramaswamy, Françoise Beaufays, Sean Augenstein, Hubert Eichner, Chloé Kiddon, Daniel Ramage

We train a recurrent neural network language model using a distributed, on-device learning framework called federated learning for the purpose of next-word prediction in a virtual keyboard for smartphones.

Federated Learning · Language Modelling
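The server-side averaging step of the federated learning framework described in the abstract can be sketched minimally (the function and example numbers are illustrative, not the paper's code): each client trains locally, and the server averages the clients' model weights, weighting each client by its number of training examples.

```python
import numpy as np

def federated_averaging(client_weights, client_num_examples):
    """FedAvg server step (sketch): weighted average of locally trained
    model weights, with each client weighted by its example count."""
    total = sum(client_num_examples)
    return sum(w * (n / total)
               for w, n in zip(client_weights, client_num_examples))

# Hypothetical flattened model weights from three virtual-keyboard clients.
w = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
n = [10, 10, 20]
avg = federated_averaging(w, n)  # third client contributes half the weight
```

Weighting by example count makes the aggregate equivalent to training on the pooled data for one averaging step, while raw text never leaves the devices.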

Mobile Keyboard Input Decoding with Finite-State Transducers

no code implementations · 13 Apr 2017 · Tom Ouyang, David Rybach, Françoise Beaufays, Michael Riley

We describe the general framework of what we call, for short, the keyboard "FST decoder", as well as the implementation details that are new compared to a speech FST decoder.

Speech Recognition

Fast and Accurate Recurrent Neural Network Acoustic Models for Speech Recognition

no code implementations · 24 Jul 2015 · Haşim Sak, Andrew Senior, Kanishka Rao, Françoise Beaufays

We have recently shown that deep Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform feed forward deep neural networks (DNNs) as acoustic models for speech recognition.

General Classification · Speech Recognition +1

Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition

no code implementations · 5 Feb 2014 · Haşim Sak, Andrew Senior, Françoise Beaufays

However, in contrast to the deep neural networks, the use of RNNs in speech recognition has been limited to phone recognition in small scale tasks.

Handwriting Recognition · Language Modelling +2
