Search Results for author: Om Thakkar

Found 21 papers, 5 papers with code

Noise Masking Attacks and Defenses for Pretrained Speech Models

no code implementations · 2 Apr 2024 · Matthew Jagielski, Om Thakkar, Lun Wang

Our method fine-tunes the encoder to produce an ASR model, then performs noise masking on this model; we find this recovers private information from the pretraining data, even though the model never saw transcripts during pretraining.

Automatic Speech Recognition (ASR) +1

Unintended Memorization in Large ASR Models, and How to Mitigate It

no code implementations · 18 Oct 2023 · Lun Wang, Om Thakkar, Rajiv Mathews

We empirically show that clipping each example's gradient can mitigate memorization for sped-up training examples with up to 16 repetitions in the training set.

Automatic Speech Recognition (ASR) +2
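The mitigation the snippet describes — bounding each example's gradient before aggregation so that no single (possibly repeated) example dominates an update — can be sketched as follows. This is a minimal NumPy illustration of per-example L2 clipping, not the paper's training pipeline; the function name and the clip norm of 1.0 are illustrative.

```python
import numpy as np

def clip_per_example_gradients(grads, clip_norm=1.0):
    """Clip each example's gradient to an L2 norm bound, then average.

    grads: list of per-example gradient vectors (np.ndarray).
    clip_norm: maximum allowed L2 norm per example (illustrative value).
    """
    clipped = []
    for g in grads:
        norm = np.linalg.norm(g)
        # Scale down only gradients whose norm exceeds the bound.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(g * scale)
    # Aggregate after clipping, so no single example dominates the update.
    return np.mean(clipped, axis=0)

# Example: one outlier gradient (norm 10) is tamed by the bound.
grads = [np.array([0.1, 0.2]), np.array([10.0, 0.0])]
agg = clip_per_example_gradients(grads, clip_norm=1.0)
```

With the bound at 1.0, the outlier contributes at most a unit-norm vector to the average, which is the property that limits memorization of heavily repeated examples.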

Why Is Public Pretraining Necessary for Private Model Training?

no code implementations · 19 Feb 2023 · Arun Ganesh, Mahdi Haghifam, Milad Nasr, Sewoong Oh, Thomas Steinke, Om Thakkar, Abhradeep Thakurta, Lun Wang

To explain this phenomenon, we hypothesize that the non-convex loss landscape of model training requires the optimization algorithm to go through two phases.

Transfer Learning

Recycling Scraps: Improving Private Learning by Leveraging Intermediate Checkpoints

no code implementations · 4 Oct 2022 · Virat Shejwalkar, Arun Ganesh, Rajiv Mathews, Om Thakkar, Abhradeep Thakurta

Empirically, we show that the last few checkpoints can provide a reasonable lower bound for the variance of a converged DP model.
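The idea of treating the last few checkpoints of a run as rough stand-ins for repeated samples of the final model can be sketched as below. This is a toy illustration of estimating parameter variance from late checkpoints; the function name, the uniform treatment of checkpoints, and the averaging over coordinates are all assumptions for the sketch, not the paper's exact estimator.

```python
import numpy as np

def checkpoint_variance(checkpoints):
    """Estimate parameter variance from the last few training checkpoints.

    checkpoints: list of parameter vectors (np.ndarray) from late in a run.
    Returns the per-coordinate variance across checkpoints, averaged over
    coordinates — a cheap proxy for the variance of the converged model.
    """
    stacked = np.stack(checkpoints)              # shape: (k, num_params)
    return float(np.var(stacked, axis=0).mean())

# Example: three late checkpoints of a 2-parameter model.
ckpts = [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([0.9, 1.9])]
est = checkpoint_variance(ckpts)
```

Since checkpoints are post-processing of a DP training run, reading them off incurs no additional privacy cost, which is what makes this kind of reuse attractive.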

Detecting Unintended Memorization in Language-Model-Fused ASR

no code implementations · 20 Apr 2022 · W. Ronny Huang, Steve Chien, Om Thakkar, Rajiv Mathews

End-to-end (E2E) models are often accompanied by language models (LMs) via shallow fusion, which boosts their overall quality as well as their recognition of rare words.

Language Modelling · Memorization

Extracting Targeted Training Data from ASR Models, and How to Mitigate It

no code implementations · 18 Apr 2022 · Ehsan Amid, Om Thakkar, Arun Narayanan, Rajiv Mathews, Françoise Beaufays

We design Noise Masking, a fill-in-the-blank style method for extracting targeted parts of training data from trained ASR models.

Data Augmentation
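The fill-in-the-blank setup the abstract describes — blanking out a targeted span of a training utterance and seeing what a trained ASR model transcribes in its place — can be sketched with a toy masking function. Everything here (the function name, the RMS-energy matching, the fixed seed) is illustrative, not the paper's exact procedure.

```python
import numpy as np

def noise_mask(utterance, start, end, rng=None):
    """Replace utterance[start:end] with Gaussian noise of matched energy.

    Toy sketch of the fill-in-the-blank attack setup: feed the masked
    audio to a trained ASR model and inspect what it transcribes for the
    masked span — a confident transcript there may be memorized data.
    """
    rng = rng or np.random.default_rng(0)
    masked = utterance.astype(float).copy()
    segment = masked[start:end]
    # Match the RMS energy of the replaced span so the noise is plausible.
    scale = float(np.sqrt(np.mean(segment ** 2))) or 1.0
    masked[start:end] = rng.normal(0.0, scale, size=end - start)
    return masked

# Example: mask the middle of a short synthetic waveform.
audio = np.sin(np.linspace(0, 2 * np.pi, 100))
masked = noise_mask(audio, 40, 60)
```

The audio outside the masked span is left untouched, so any recovery inside the span is attributable to the model rather than to the input.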

Public Data-Assisted Mirror Descent for Private Model Training

no code implementations · 1 Dec 2021 · Ehsan Amid, Arun Ganesh, Rajiv Mathews, Swaroop Ramaswamy, Shuang Song, Thomas Steinke, Vinith M. Suriyakumar, Om Thakkar, Abhradeep Thakurta

In this paper, we revisit the problem of using in-distribution public data to improve the privacy/utility trade-offs for differentially private (DP) model training.

Federated Learning

The Role of Adaptive Optimizers for Honest Private Hyperparameter Selection

no code implementations · NeurIPS 2021 · Shubhankar Mohapatra, Sajin Sasy, Xi He, Gautam Kamath, Om Thakkar

Hyperparameter optimization is a ubiquitous challenge in machine learning, and the performance of a trained model depends crucially on effective hyperparameter selection.

BIG-bench Machine Learning · Hyperparameter Optimization

Revealing and Protecting Labels in Distributed Training

1 code implementation · NeurIPS 2021 · Trung Dang, Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Peter Chin, Françoise Beaufays

Prior works have demonstrated that labels can be revealed analytically from the last layer of certain models (e.g., ResNet), or reconstructed jointly with model inputs by using Gradients Matching [Zhu et al. '19] given additional knowledge about the current state of the model.

Automatic Speech Recognition (ASR) +4
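The analytic label-recovery result the snippet refers to is easy to see in the single-example case: for softmax cross-entropy, the gradient of the last-layer bias is the softmax output minus the one-hot label, so its unique negative entry pinpoints the true label. A minimal sketch of that observation, with illustrative names; batched training and models without a last-layer bias need more work, which is what the paper addresses.

```python
import numpy as np

def reveal_label(bias_grad):
    """Recover the true label from a last-layer bias gradient.

    For softmax cross-entropy on a single example, the bias gradient
    equals (softmax_probs - one_hot_label); probabilities lie in [0, 1],
    so the only negative entry sits at the true label's index.
    """
    return int(np.argmin(bias_grad))

# Toy check: 3 classes, true label is class 2.
probs = np.array([0.2, 0.3, 0.5])                    # model's softmax output
bias_grad = probs - np.array([0.0, 0.0, 1.0])        # [0.2, 0.3, -0.5]
label = reveal_label(bias_grad)
```

This is why sharing raw gradients in distributed training can leak labels even when inputs are never transmitted.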

Training Production Language Models without Memorizing User Data

no code implementations · 21 Sep 2020 · Swaroop Ramaswamy, Om Thakkar, Rajiv Mathews, Galen Andrew, H. Brendan McMahan, Françoise Beaufays

This paper presents the first consumer-scale next-word prediction (NWP) model trained with Federated Learning (FL) while leveraging the Differentially Private Federated Averaging (DP-FedAvg) technique.

Federated Learning · Memorization

Understanding Unintended Memorization in Federated Learning

no code implementations · 12 Jun 2020 · Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Françoise Beaufays

In this paper, we initiate a formal study to understand the effect of different components of canonical FL on unintended memorization in trained models, comparing with the central learning setting.

Clustering · Federated Learning +1

Evading Curse of Dimensionality in Unconstrained Private GLMs via Private Gradient Descent

no code implementations · 11 Jun 2020 · Shuang Song, Thomas Steinke, Om Thakkar, Abhradeep Thakurta

We show that for unconstrained convex generalized linear models (GLMs), one can obtain an excess empirical risk of $\tilde O\left(\sqrt{{\texttt{rank}}}/\epsilon n\right)$, where ${\texttt{rank}}$ is the rank of the feature matrix in the GLM problem, $n$ is the number of data samples, and $\epsilon$ is the privacy parameter.


Guaranteed Validity for Empirical Approaches to Adaptive Data Analysis

1 code implementation · 21 Jun 2019 · Ryan Rogers, Aaron Roth, Adam Smith, Nathan Srebro, Om Thakkar, Blake Woodworth

We design a general framework for answering adaptive statistical queries that focuses on providing explicit confidence intervals along with point estimates.


Differentially Private Learning with Adaptive Clipping

1 code implementation · NeurIPS 2021 · Galen Andrew, Om Thakkar, H. Brendan McMahan, Swaroop Ramaswamy

Existing approaches for training neural networks with user-level differential privacy (e.g., DP Federated Averaging) in federated learning (FL) settings involve bounding the contribution of each user's model update by clipping it to some constant value.

Federated Learning
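The two pieces the abstract contrasts — clipping each user's update to a fixed norm bound, versus adapting that bound during training — can be sketched as follows. The fixed-bound clip is the standard DP-FedAvg-style operation; the geometric bound update below only illustrates the adaptive idea (track what fraction of updates fit under the bound and nudge it toward a target quantile) and is not the paper's exact, privately-noised rule. Function names and constants are illustrative.

```python
import numpy as np

def clip_update(update, clip_bound):
    """Bound a user's model update to L2 norm clip_bound (DP-FedAvg style)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_bound / (norm + 1e-12))

def adapt_clip_bound(clip_bound, frac_unclipped, target_quantile=0.5, lr=0.2):
    """Nudge the bound toward a target quantile of update norms.

    If more than target_quantile of updates already fit under the bound,
    the bound shrinks; if fewer fit, it grows. Illustrative sketch only.
    """
    return clip_bound * np.exp(-lr * (frac_unclipped - target_quantile))

# Example: an update of norm 5 is scaled down onto the unit ball.
clipped = clip_update(np.array([3.0, 4.0]), clip_bound=1.0)
```

Adapting the bound matters because update norms shift as training progresses, and a fixed constant chosen up front wastes either privacy budget (bound too large) or signal (bound too small).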

Model-Agnostic Private Learning via Stability

no code implementations · 14 Mar 2018 · Raef Bassily, Om Thakkar, Abhradeep Thakurta

We provide a new technique to boost the average-case stability properties of learning algorithms to strong (worst-case) stability properties, and then exploit them to obtain private classification algorithms.

Binary Classification · Classification +2

Differentially Private Matrix Completion Revisited

no code implementations · ICML 2018 · Prateek Jain, Om Thakkar, Abhradeep Thakurta

We provide the first provably joint differentially private algorithm with formal utility guarantees for the problem of user-level privacy-preserving collaborative filtering.

Collaborative Filtering · Matrix Completion +1

Max-Information, Differential Privacy, and Post-Selection Hypothesis Testing

no code implementations · 13 Apr 2016 · Ryan Rogers, Aaron Roth, Adam Smith, Om Thakkar

In this paper, we initiate a principled study of how the generalization properties of approximate differential privacy can be used to perform adaptive hypothesis testing, while giving statistically valid $p$-value corrections.

Two-sample Testing
