Search Results for author: Reza Shokri

Found 34 papers, 13 papers with code

Low-Cost High-Power Membership Inference Attacks

no code implementations • 6 Dec 2023 • Sajjad Zarifzadeh, Philippe Liu, Reza Shokri

Under computation constraints, where only a limited number of pre-trained reference models (as few as 1) are available, and also when we vary other elements of the attack, our method performs exceptionally well, unlike some prior attacks that approach random guessing.

Inference Attack • Membership Inference Attack

Leave-one-out Distinguishability in Machine Learning

1 code implementation • 29 Sep 2023 • Jiayuan Ye, Anastasia Borovykh, Soufiane Hayou, Reza Shokri

We introduce an analytical framework to quantify the changes in a machine learning algorithm's output distribution following the inclusion of a few data points in its training set, a notion we define as leave-one-out distinguishability (LOOD).
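
The LOOD notion lends itself to a simple empirical probe: compare a model's predictive distribution at query points when it is trained with and without a few extra records. The sketch below is a minimal retraining-based illustration, assuming a scikit-learn logistic regression and a KL-divergence comparison; the paper's actual framework is analytical and Gaussian-process based, not a retraining experiment.

```python
# Minimal empirical sketch of a leave-one-out distinguishability-style measurement.
# Dataset, model class, and the KL-based comparison are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_base, y_base = X[:400], y[:400]            # base training set
X_extra, y_extra = X[400:403], y[400:403]    # the few "left-in" records
X_query = X[450:460]                         # query points to probe

def predictive(train_X, train_y):
    # small classifier standing in for the learning algorithm under study
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return model.fit(train_X, train_y).predict_proba(X_query)

p_without = predictive(X_base, y_base)
p_with = predictive(np.vstack([X_base, X_extra]), np.concatenate([y_base, y_extra]))

# average KL divergence between the two predictive distributions at the query points
eps = 1e-12
kl = np.mean(np.sum(p_with * np.log((p_with + eps) / (p_without + eps)), axis=1))
print(f"average output-distribution shift (KL): {kl:.6f}")
```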

Gaussian Processes • Memorization

Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning

1 code implementation • 11 Sep 2023 • Zebang Shen, Jiayuan Ye, Anmin Kang, Hamed Hassani, Reza Shokri

Repeated parameter sharing in federated learning causes significant information leakage about private data, thus defeating its main purpose: data privacy.

Federated Learning • Image Classification +1

Bias Propagation in Federated Learning

1 code implementation • 5 Sep 2023 • Hongyan Chang, Reza Shokri

Our work calls for auditing group fairness in federated learning and designing learning algorithms that are robust to bias propagation.

Fairness • Federated Learning

On The Impact of Machine Learning Randomness on Group Fairness

no code implementations • 9 Jul 2023 • Prakhar Ganesh, Hongyan Chang, Martin Strobel, Reza Shokri

We investigate the impact of different sources of randomness in neural network training on group fairness.

Fairness

Smaller Language Models are Better Black-box Machine-Generated Text Detectors

no code implementations • 17 May 2023 • Niloofar Mireshghallah, Justus Mattern, Sicun Gao, Reza Shokri, Taylor Berg-Kirkpatrick

With the advent of fluent generative language models that can produce convincing utterances very similar to those written by humans, distinguishing whether a piece of text is machine-generated or human-written becomes more challenging and more important, as such models could be used to spread misinformation, fake news, and fake reviews, and to mimic certain authors and figures.

Misinformation

Data Privacy and Trustworthy Machine Learning

no code implementations • 14 Sep 2022 • Martin Strobel, Reza Shokri

The privacy risks of machine learning models are a major concern when training them on sensitive and personal data.

Fairness

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets

no code implementations • 31 Mar 2022 • Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini

We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak significant private details of training points belonging to other parties.

Attribute • BIG-bench Machine Learning

Differentially Private Learning Needs Hidden State (Or Much Faster Convergence)

no code implementations • 10 Mar 2022 • Jiayuan Ye, Reza Shokri

We prove that, in these settings, our privacy bound converges exponentially fast and is substantially smaller than the composition bounds, notably after a small number of training epochs.

Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks

no code implementations • 8 Mar 2022 • FatemehSadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri

The wide adoption and application of Masked language models (MLMs) on sensitive data (from legal to medical) necessitates a thorough quantitative investigation into their privacy vulnerabilities: to what extent do MLMs leak information about their training data?

Inference Attack • Membership Inference Attack +1

What Does it Mean for a Language Model to Preserve Privacy?

no code implementations • 11 Feb 2022 • Hannah Brown, Katherine Lee, FatemehSadat Mireshghallah, Reza Shokri, Florian Tramèr

Language models lack the ability to understand the context and sensitivity of text, and tend to memorize phrases present in their training sets.

Language Modelling

Privacy Auditing of Machine Learning using Membership Inference Attacks

no code implementations • 29 Sep 2021 • Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, Reza Shokri

In this paper, we present a framework that explains the implicit assumptions and the simplifications made in prior work.

BIG-bench Machine Learning

Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent

no code implementations • NeurIPS 2021 • Rishav Chourasia, Jiayuan Ye, Reza Shokri

What is the information leakage of an iterative randomized learning algorithm about its training data, when the internal state of the algorithm is private?

On the Privacy Risks of Algorithmic Fairness

1 code implementation • 7 Nov 2020 • Hongyan Chang, Reza Shokri

We show that fairness comes at the cost of privacy, and this cost is not distributed equally: the information leakage of fair models increases significantly on the unprivileged subgroups, which are the ones for whom we need fair learning.

BIG-bench Machine Learning • Decision Making +1

SOTERIA: In Search of Efficient Neural Networks for Private Inference

1 code implementation • 25 Jul 2020 • Anshul Aggarwal, Trevor E. Carlson, Reza Shokri, Shruti Tople

In this setting, our objective is to protect the confidentiality of both the users' input queries as well as the model parameters at the server, with modest computation and communication overhead.

Neural Architecture Search

Improving Deep Learning with Differential Privacy using Gradient Encoding and Denoising

no code implementations • 22 Jul 2020 • Milad Nasr, Reza Shokri, Amir Houmansadr

We show that our mechanism outperforms the state-of-the-art DPSGD; for instance, for the same model accuracy of $96.1\%$ on MNIST, our technique results in a privacy bound of $\epsilon=3.2$ compared to $\epsilon=6$ for DPSGD, which is a significant improvement.
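
For context, the sketch below shows the standard DP-SGD baseline this paper compares against (clip each example's gradient, add Gaussian noise). The paper's own gradient encoding and denoising mechanism is not reproduced here; the clipping norm, noise multiplier, and synthetic data are illustrative assumptions.

```python
# Minimal sketch of the DP-SGD baseline (per-example gradient clipping + Gaussian
# noise) for logistic regression in NumPy. Hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.1 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)
clip_norm, noise_mult, lr, batch = 1.0, 1.1, 0.1, 100

for step in range(200):
    idx = rng.choice(n, batch, replace=False)
    preds = 1.0 / (1.0 + np.exp(-X[idx] @ w))
    # per-example gradients of the logistic loss, shape (batch, d)
    grads = (preds - y[idx])[:, None] * X[idx]
    # clip each example's gradient to L2 norm <= clip_norm
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    # sum, add Gaussian noise calibrated to the clipping norm, then average
    noisy = grads.sum(0) + rng.normal(scale=noise_mult * clip_norm, size=d)
    w -= lr * noisy / batch

acc = ((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y).mean()
print(f"train accuracy: {acc:.3f}")
```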

Denoising

ML Privacy Meter: Aiding Regulatory Compliance by Quantifying the Privacy Risks of Machine Learning

1 code implementation • 18 Jul 2020 • Sasi Kumar Murakonda, Reza Shokri

In addition to the threats of illegitimate access to data through security breaches, machine learning models pose an additional privacy risk to the data by indirectly revealing information about it through their predictions and parameters.

BIG-bench Machine Learning • Inference Attack +1

Model Explanations with Differential Privacy

no code implementations • 16 Jun 2020 • Neel Patel, Reza Shokri, Yair Zick

The drawback is that model explanations can leak information about the training data and the explanation data used to generate them, thus undermining data privacy.

Decision Making

Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer

no code implementations • 24 Dec 2019 • Hongyan Chang, Virat Shejwalkar, Reza Shokri, Amir Houmansadr

Collaborative (federated) learning enables multiple parties to train a model without sharing their private data, but through repeated sharing of the parameters of their local models.
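
The parameter-sharing loop this snippet refers to is the conventional federated averaging scheme sketched below; Cronus instead transfers knowledge in a black-box manner (e.g. via predictions on shared data), which is not shown here. The linear model, client count, and learning rates are illustrative assumptions.

```python
# Minimal sketch of conventional parameter-sharing federated averaging (FedAvg-style).
import numpy as np

rng = np.random.default_rng(1)
d, n_clients = 10, 5
true_w = rng.normal(size=d)
# each client holds a private linear-regression dataset
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(200, d))
    y = X @ true_w + 0.1 * rng.normal(size=200)
    clients.append((X, y))

global_w = np.zeros(d)
for rnd in range(20):                           # communication rounds
    local_updates = []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(10):                     # local gradient steps
            grad = X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_updates.append(w)                 # parameters are shared each round
    global_w = np.mean(local_updates, axis=0)   # server-side averaging

print("distance to true model:", np.linalg.norm(global_w - true_w))
```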

Federated Learning • Privacy Preserving +1

Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning

no code implementations • 27 Sep 2019 • Congzheng Song, Reza Shokri

In this paper, we present membership encoding for training deep neural networks and encoding the membership information, i.e., whether a data point is used for training, for a subset of training data.

Model Compression

On the Privacy Risks of Model Explanations

no code implementations • 29 Jun 2019 • Reza Shokri, Martin Strobel, Yair Zick

We analyze connections between model explanations and the leakage of sensitive information about the model's training set.

Bypassing Backdoor Detection Algorithms in Deep Learning

1 code implementation • 31 May 2019 • Te Juin Lester Tan, Reza Shokri

Many detection algorithms are designed to detect backdoors from input samples or model parameters by exploiting the statistical difference between the latent representations of adversarial and clean input samples in the poisoned model.
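
Below is a minimal sketch of the kind of latent-space statistical test such detectors rely on: flag inputs whose representation lies far (in Mahalanobis distance) from the distribution of clean representations. The synthetic features and threshold are illustrative assumptions; the paper's contribution is an attack that trains the poisoned model so that triggered inputs evade exactly this kind of test.

```python
# Minimal sketch of a latent-space backdoor check: compare suspect representations
# against the clean-representation distribution via Mahalanobis distance.
import numpy as np

rng = np.random.default_rng(2)
d = 64
clean_latents = rng.normal(loc=0.0, size=(500, d))    # clean inputs of one class
suspect_latents = rng.normal(loc=1.0, size=(20, d))    # e.g., triggered inputs

mu = clean_latents.mean(axis=0)
cov = np.cov(clean_latents, rowvar=False) + 1e-6 * np.eye(d)
cov_inv = np.linalg.inv(cov)

def mahalanobis(z):
    diff = z - mu
    return float(np.sqrt(diff @ cov_inv @ diff))

# threshold at the 99th percentile of clean distances, flag anything beyond it
threshold = np.quantile([mahalanobis(z) for z in clean_latents], 0.99)
flagged = sum(mahalanobis(z) > threshold for z in suspect_latents)
print(f"flagged {flagged}/{len(suspect_latents)} suspect inputs")
```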

Quantifying the Privacy Risks of Learning High-Dimensional Graphical Models

no code implementations • 29 May 2019 • Sasi Kumar Murakonda, Reza Shokri, George Theodorakopoulos

It provides a measure of the potential leakage of a model given its structure, as a function of the model complexity and the size of the training set.

Inference Attack • Vocal Bursts Intensity Prediction

Privacy Risks of Securing Machine Learning Models against Adversarial Examples

1 code implementation • 24 May 2019 • Liwei Song, Reza Shokri, Prateek Mittal

To perform the membership inference attacks, we leverage the existing inference methods that exploit model predictions.
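
A minimal sketch of a prediction-based membership inference attack of this kind is given below: predict "member" when the target model's confidence on the example's true label exceeds a threshold. The dataset, target model, and fixed threshold are illustrative assumptions rather than the paper's exact attacks, where thresholds are typically calibrated (e.g. with shadow models).

```python
# Minimal sketch of a confidence-threshold membership inference attack.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier

X, y = load_digits(return_X_y=True)
members_X, members_y = X[:500], y[:500]            # records used for training
nonmembers_X, nonmembers_y = X[500:1000], y[500:1000]

target = RandomForestClassifier(n_estimators=50, random_state=0).fit(members_X, members_y)

def true_label_confidence(model, Xs, ys):
    # model's predicted probability assigned to each example's true label
    probs = model.predict_proba(Xs)
    return probs[np.arange(len(ys)), ys]

conf_in = true_label_confidence(target, members_X, members_y)
conf_out = true_label_confidence(target, nonmembers_X, nonmembers_y)

threshold = 0.9                                     # illustrative fixed threshold
guesses = np.concatenate([conf_in, conf_out]) > threshold
truth = np.concatenate([np.ones(len(conf_in)), np.zeros(len(conf_out))])
print(f"attack accuracy: {(guesses == truth).mean():.3f}")
```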

Adversarial Defense • BIG-bench Machine Learning +1

Machine Learning with Membership Privacy using Adversarial Regularization

1 code implementation • 16 Jul 2018 • Milad Nasr, Reza Shokri, Amir Houmansadr

In this paper, we focus on such attacks against black-box models, where the adversary can only observe the output of the model, but not its parameters.

BIG-bench Machine Learning • General Classification +2

Chiron: Privacy-preserving Machine Learning as a Service

no code implementations • 15 Mar 2018 • Tyler Hunt, Congzheng Song, Reza Shokri, Vitaly Shmatikov, Emmett Witchel

Existing ML-as-a-service platforms require users to reveal all training data to the service operator.

Cryptography and Security

Plausible Deniability for Privacy-Preserving Data Synthesis

no code implementations • 26 Aug 2017 • Vincent Bindschaedler, Reza Shokri, Carl A. Gunter

We demonstrate the efficiency of this generative technique on a large dataset; it is shown to preserve the utility of the original data with respect to various statistical analyses and machine learning measures.

De-identification • Privacy Preserving

Membership Inference Attacks against Machine Learning Models

12 code implementations • 18 Oct 2016 • Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov

We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained.

BIG-bench Machine Learning • General Classification +2

Defeating Image Obfuscation with Deep Learning

no code implementations • 1 Sep 2016 • Richard McPherson, Reza Shokri, Vitaly Shmatikov

We demonstrate that modern image recognition methods based on artificial neural networks can recover hidden information from images protected by various forms of obfuscation.

Privacy Preserving

Privacy Games: Optimal User-Centric Data Obfuscation

no code implementations • 14 Feb 2014 • Reza Shokri

We optimize utility subject to a joint guarantee of differential privacy (indistinguishability) and distortion privacy (inference error).
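
Schematically, the optimization described here can be written as a program over the obfuscation mechanism p(o|x); the notation below is illustrative, not the paper's exact formulation.

```latex
% Schematic of a utility-maximizing obfuscation mechanism under a joint guarantee of
% differential privacy and distortion privacy (adversary's expected inference error).
% Symbols: \pi prior over secrets x, u utility, d distortion metric, \hat{x} the
% adversary's estimator; all are illustrative placeholders.
\begin{align}
  \max_{p(o \mid x)} \quad & \sum_{x,\, o} \pi(x)\, p(o \mid x)\, u(x, o)
    && \text{(expected utility)} \\
  \text{s.t.} \quad
  & p(o \mid x) \le e^{\varepsilon}\, p(o \mid x'), \quad \forall o,\ \forall x \sim x'
    && \text{(differential privacy)} \\
  & \min_{\hat{x}(\cdot)} \sum_{x,\, o} \pi(x)\, p(o \mid x)\, d\big(\hat{x}(o), x\big) \ge D
    && \text{(distortion privacy)} \\
  & \sum_{o} p(o \mid x) = 1, \qquad p(o \mid x) \ge 0 .
\end{align}
```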

Cryptography and Security • Computer Science and Game Theory
