Membership Inference Attack

67 papers with code • 0 benchmarks • 0 datasets

A membership inference attack (MIA) lets an adversary query a trained machine learning model to predict whether a particular example was part of the model's training set.

Most implemented papers

Membership Inference Attacks against Machine Learning Models

csong27/membership-inference 18 Oct 2016

We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained.

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

AhmedSalem2/ML-Leaks 4 Jun 2018

In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.

Membership Inference Attacks From First Principles

privacytrustlab/ml_privacy_meter 7 Dec 2021

A membership inference attack allows an adversary to query a trained machine learning model to predict whether or not a particular example was contained in the model's training dataset.

Synthesis of Realistic ECG using Generative Adversarial Networks

Brophy-E/ECG_GAN_MBD 19 Sep 2019

Finally, we discuss the privacy concerns associated with sharing synthetic data produced by GANs and test their ability to withstand a simple membership inference attack.

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

jinyuan-jia/memguard 23 Sep 2019

Specifically, given black-box access to the target classifier, the attacker trains a binary classifier that takes a data sample's confidence score vector, as predicted by the target classifier, as input and predicts whether the sample is a member of the target classifier's training dataset.

Disparate Vulnerability to Membership Inference Attacks

spring-epfl/disparate-vulnerability 2 Jun 2019

Differential privacy bounds disparate vulnerability but can significantly reduce the accuracy of the model.

Membership Inference Attacks on Machine Learning: A Survey

HongshengHu/membership-inference-machine-learning-literature 14 Mar 2021

In recent years, MIAs have been shown to be effective on various ML models, e.g., classification models and generative models.

Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment

jiepku/mia-safecompress 11 Aug 2022

By simulating the attack mechanism as the safety test, SafeCompress can automatically compress a big model to a small one following the dynamic sparse training paradigm.

Practical Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration

tsinghua-fib-lab/neurips2024_spv-mia 10 Nov 2023

However, this hypothesis relies heavily on the overfitting of target models, which can be mitigated by multiple regularization methods and by the generalization ability of LLMs.

Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression against Heterogeneous Attacks Toward AI Software Deployment

jiepku/safecompress 2 Jan 2024

To mitigate this issue, AI software compression plays a crucial role: it aims to reduce model size while maintaining high performance.