Membership Inference Attack

55 papers with code • 0 benchmarks • 0 datasets

Membership inference attacks aim to determine whether a given data sample was part of a model's training set, typically by exploiting differences in the model's behavior on training data versus unseen data.

Most implemented papers

Reconstruction and Membership Inference Attacks against Generative Models

SAP-samples/security-research-membership-inference-against-generative-networks 7 Jun 2019

We present two information leakage attacks that outperform previous work on membership inference against generative models.

GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models

DingfanChen/GAN-Leaks 9 Sep 2019

In addition, we propose the first generic attack model that can be instantiated in a large range of settings and is applicable to various kinds of deep generative models.

An Empirical Study on the Intrinsic Privacy of SGD

microsoft/intrinsic-private-sgd 5 Dec 2019

Introducing noise in the training of machine learning systems is a powerful way to protect individual privacy via differential privacy guarantees, but comes at a cost to utility.
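The noise-for-privacy trade-off described above is usually realized as DP-SGD: per-example gradients are clipped to a fixed L2 norm and Gaussian noise is added before the update. A minimal numpy sketch of one such step (illustrative only, not the paper's implementation; all names and hyperparameters here are assumptions):

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_mult=1.0, rng=None):
    """One illustrative DP-SGD step: clip each per-example gradient to an
    L2 norm of `clip_norm`, sum, add Gaussian noise, average, and update."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds the bound.
        clipped.append(g / max(1.0, norm / clip_norm))
    grad_sum = np.sum(clipped, axis=0)
    # Noise scale is proportional to the clipping bound (the sensitivity).
    noise = rng.normal(0.0, noise_mult * clip_norm, size=grad_sum.shape)
    noisy_mean = (grad_sum + noise) / len(per_example_grads)
    return weights - lr * noisy_mean

w = np.zeros(3)
grads = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 0.5, 0.0])]
w_new = dp_sgd_step(w, grads)
```

Larger `noise_mult` gives stronger privacy guarantees but noisier updates, which is exactly the utility cost the paper's empirical study measures.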

Assessing differentially private deep learning with Membership Inference

SAP-samples/security-research-membership-inference-and-differential-privacy 24 Dec 2019

We empirically compare local and central differential privacy mechanisms under white- and black-box membership inference to evaluate their relative privacy-accuracy trade-offs.

Membership Inference Attacks Against Object Detection Models

yechanp/Membership-Inference-Attacks-Against-Object-Detection-Models 12 Jan 2020

Machine learning models can leak information about the dataset on which they were trained.
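The simplest way such leakage is exploited is a loss- or confidence-threshold attack: because models fit training samples more closely, a low loss on a sample is evidence of membership. A toy sketch (a generic baseline, not any specific paper's attack):

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Predict membership: True means 'was in the training set'.
    A sample with loss below the threshold is guessed to be a member."""
    return np.asarray(losses) < threshold

# Toy data: members (seen during training) tend to have lower loss.
member_losses = [0.05, 0.10, 0.02]
nonmember_losses = [0.90, 1.20, 0.75]
guesses = loss_threshold_attack(member_losses + nonmember_losses, threshold=0.5)
# guesses -> [True, True, True, False, False, False]
```

The gap between the two loss distributions is what the papers above measure; when a model generalizes perfectly, the distributions overlap and this attack degrades to random guessing.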

Data and Model Dependencies of Membership Inference Attack

SJabin/Data_Model_Dependencies_MIA 17 Feb 2020

Our results reveal the relationship between MIA accuracy and properties of the dataset and training model in use.

When Machine Unlearning Jeopardizes Privacy

MinChen00/UnlearningLeaks 5 May 2020

More importantly, we show that in multiple cases our attack outperforms the classical membership inference attack on the original ML model, indicating that machine unlearning can have counterproductive effects on privacy.

ML Privacy Meter: Aiding Regulatory Compliance by Quantifying the Privacy Risks of Machine Learning

privacytrustlab/ml_privacy_meter 18 Jul 2020

In addition to the threat of illegitimate access to data through security breaches, machine learning models pose a further privacy risk: they indirectly reveal information about the data through their predictions and parameters.

Investigating Membership Inference Attacks under Data Dependencies

t3humphries/non-iid-mia 23 Oct 2020

Our results reveal that training set dependencies can severely increase the performance of MIAs, and therefore assuming that data samples are statistically independent can significantly underestimate the performance of MIAs.

Practical Blind Membership Inference Attack via Differential Comparisons

hyhmia/BlindMI 5 Jan 2021

The success of the former depends heavily on the quality of the shadow model, i.e., the transferability between the shadow and the target. The latter, given only black-box probing access to the target model, cannot make effective inferences about unknown samples compared with MI attacks that use shadow models, because too few qualified samples are labeled with ground-truth membership information.
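The shadow-model approach mentioned above calibrates the attack on a model whose training set the attacker controls, then transfers the decision rule to the target. A minimal sketch of that calibration step (an assumed generic pipeline, not BlindMI itself):

```python
import numpy as np

def calibrate_threshold(shadow_member_conf, shadow_nonmember_conf):
    """Pick the confidence threshold that best separates the shadow model's
    members from non-members, by maximizing accuracy over candidate cuts."""
    scores = np.concatenate([shadow_member_conf, shadow_nonmember_conf])
    labels = np.concatenate([np.ones(len(shadow_member_conf)),
                             np.zeros(len(shadow_nonmember_conf))])
    best_t, best_acc = 0.5, 0.0
    for t in np.unique(scores):
        acc = np.mean((scores >= t) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Shadow members tend to receive higher max-softmax confidence.
t = calibrate_threshold(np.array([0.99, 0.97, 0.95]),
                        np.array([0.60, 0.70, 0.50]))
# The attacker then labels a target-model query as "member" when its
# confidence is >= t — accurate only insofar as the shadow model's
# confidence distribution transfers to the target.
```

This transferability assumption is precisely the weakness the BlindMI paper targets with its differential-comparison alternative.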