Federated learning (FL) has recently emerged as a new form of collaborative machine learning, where a common model can be learned while keeping all the training data on local devices.
Specifically, given black-box access to the target classifier, the attacker trains a binary classifier that takes as input a data sample's confidence score vector, as predicted by the target classifier, and predicts whether the sample is a member or non-member of the target classifier's training dataset.
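The attack described above can be sketched as follows. This is a minimal illustration, not any specific paper's implementation: the member and non-member confidence vectors are synthetic (assuming the target model is more confident on its training data), and the attack model is a small hand-rolled logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_confidences(n, peak):
    # Hypothetical stand-in for querying a target classifier: Dirichlet
    # confidence vectors whose top entry concentrates near peak/(peak+9).
    alpha = np.full(10, 1.0)
    alpha[0] = peak
    v = rng.dirichlet(alpha, size=n)
    # Sort each vector in descending order so the features are class-agnostic.
    return np.sort(v, axis=1)[:, ::-1]

members = synth_confidences(500, peak=30.0)    # overconfident on training data
nonmembers = synth_confidences(500, peak=5.0)  # less confident on unseen data

X = np.vstack([members, nonmembers])
y = np.concatenate([np.ones(500), np.zeros(500)])

# Minimal logistic-regression attack classifier, trained by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted membership probability
    g = p - y                                # gradient of the logistic loss
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

pred = (X @ w + b) > 0.0   # attacker's member / non-member decision
print(f"attack accuracy: {(pred == y).mean():.2f}")
```

Under these assumptions the attack does much better than random guessing, which is exactly the signal a membership inference attacker exploits: the gap between the model's confidence on seen versus unseen data.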
Finally, we discuss the privacy concerns associated with sharing synthetic data produced by GANs and test the ability of such data to withstand a simple membership inference attack.
To defend against inference attacks, we can add carefully crafted noise to the public data, turning it into adversarial examples so that the attacker's classifier makes incorrect predictions about the private data.
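One way to see this defense idea concretely is against a linear attack classifier. The sketch below is an assumed toy setup (the attack weights `w`, `b` and the `defend` routine are made up for illustration): it nudges a confidence vector against the attack model's decision boundary until the attacker predicts non-member, while keeping the vector a valid probability distribution with the same predicted class.

```python
import numpy as np

# Hypothetical linear attack classifier over a 3-class confidence vector:
# it predicts "member" whenever w @ conf + b > 0. In practice this would
# be a trained model; these weights are made up for illustration.
w = np.array([4.0, -1.0, -1.0])
b = -1.0

def defend(conf, w, b, step=0.01, max_iter=100):
    """Add small, targeted noise to a confidence vector so the attack
    classifier flips to 'non-member', while keeping the vector a valid
    distribution and leaving the predicted class unchanged."""
    x = conf.copy()
    label = int(np.argmax(conf))
    for _ in range(max_iter):
        if w @ x + b <= 0:           # attack classifier now says non-member
            return x
        x = x - step * w             # move against the attack decision boundary
        x = np.clip(x, 1e-6, None)
        x = x / x.sum()              # re-normalize onto the probability simplex
        if int(np.argmax(x)) != label:
            return conf              # give up rather than change the prediction
    return x

conf = np.array([0.9, 0.05, 0.05])   # overconfident score: flagged as member
noisy = defend(conf, w, b)
print(w @ conf + b > 0)    # original vector is flagged as a member
print(w @ noisy + b > 0)   # perturbed vector evades the attack
```

The design point is the trade-off the sentence above describes: the noise must be large enough to fool the attacker's classifier but constrained so the released (public) output keeps its utility, here enforced by preserving the argmax class.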
In this paper, we focus on membership inference attacks against GANs, which have the potential to reveal information about the victim model's training data.
With the prevalence of machine learning services, crowdsourced data containing sensitive information poses substantial privacy challenges.
In this work, we present a new defense against membership inference attacks that preserves the utility of the target machine learning models significantly better than prior defenses.
We present two information leakage attacks that outperform previous work on membership inference against generative models.
A membership inference attack (MIA) against a machine learning model enables an attacker to determine whether a given data record was part of the model's training dataset or not.
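A minimal way to illustrate the decision an MIA makes is a loss-threshold rule: models tend to incur lower loss on records they were trained on, so an attacker can guess "member" whenever a record's loss falls below a threshold. The sketch below uses simulated per-record losses and an assumed threshold; it is not any specific paper's attack.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-record losses on the target model (assumed distributions):
# training records (members) cluster low, held-out records cluster higher.
member_losses = rng.gamma(shape=2.0, scale=0.05, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.50, size=1000)

# Assumed threshold, e.g. an estimate of the average training loss.
threshold = 0.3

def is_member(loss, threshold):
    # Guess "member" when the record's loss is suspiciously low.
    return loss < threshold

tpr = np.mean([is_member(l, threshold) for l in member_losses])
fpr = np.mean([is_member(l, threshold) for l in nonmember_losses])
print(f"true positive rate {tpr:.2f}, false positive rate {fpr:.2f}")
```

A large gap between the true and false positive rates is what makes the attack meaningful: the attacker distinguishes training records from fresh ones far better than chance.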