Membership Inference Attack
67 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Membership Inference Attacks against Machine Learning Models
We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained.
ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.
Membership Inference Attacks From First Principles
A membership inference attack allows an adversary to query a trained machine learning model to predict whether or not a particular example was contained in the model's training dataset.
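The simplest instantiation of this query-and-decide protocol is a confidence (or loss) threshold test: overfit models tend to be more confident on examples they were trained on. The sketch below illustrates that baseline only, not the paper's calibrated likelihood-ratio attack; the `predict_proba` interface and the threshold value are assumptions for illustration.

```python
import numpy as np

def confidence_threshold_mia(model, examples, labels, threshold=0.9):
    """Baseline membership inference: flag an example as a training-set
    member when the model's confidence on its true label exceeds a threshold.

    Assumes a scikit-learn-style `predict_proba` returning an array of
    shape (n_examples, n_classes); both the interface and the default
    threshold are illustrative, not taken from any specific paper.
    """
    probs = model.predict_proba(examples)
    # Confidence assigned to each example's true class.
    true_class_conf = probs[np.arange(len(labels)), labels]
    return true_class_conf >= threshold  # True = predicted "member"
```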
Synthesis of Realistic ECG using Generative Adversarial Networks
Finally, we discuss the privacy concerns associated with sharing synthetic data produced by GANs and test their ability to withstand a simple membership inference attack.
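One common form of a "simple" membership inference test against released synthetic data is a nearest-neighbor distance check: real records that lie unusually close to some generated sample are flagged as likely training members, on the assumption that the generator may have memorized them. The paper's exact test may differ; the function and threshold below are a hypothetical sketch.

```python
import numpy as np

def nearest_sample_mia(synthetic_data, query_records, threshold):
    """Distance-based MIA on synthetic data (illustrative sketch).

    `synthetic_data`: array of shape (n_synthetic, d) released by the GAN.
    `query_records`:  array of shape (n_queries, d) of candidate members.
    A record is predicted to be a training member if its Euclidean
    distance to the closest synthetic sample is below `threshold`.
    """
    preds = []
    for record in query_records:
        d = np.min(np.linalg.norm(synthetic_data - record, axis=1))
        preds.append(d <= threshold)
    return np.array(preds)
```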
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
Specifically, given black-box access to the target classifier, the attacker trains a binary classifier that takes a data sample's confidence score vector, as predicted by the target classifier, as input and predicts whether the sample is a member or non-member of the target classifier's training dataset.
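A minimal sketch of that black-box attack, assuming the attacker holds a shadow model and shadow data whose membership is known (standard shadow-model setup); the scikit-learn classifier choice and all function names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_attack_classifier(shadow_model, shadow_in_X, shadow_out_X):
    """Train a binary member/non-member classifier on confidence-score
    vectors produced by a shadow model whose training set we control."""
    member_scores = shadow_model.predict_proba(shadow_in_X)     # known members
    nonmember_scores = shadow_model.predict_proba(shadow_out_X)  # known non-members
    X = np.vstack([member_scores, nonmember_scores])
    y = np.concatenate([np.ones(len(member_scores)), np.zeros(len(nonmember_scores))])
    return RandomForestClassifier(n_estimators=100).fit(X, y)

def infer_membership(attack_clf, target_model, samples):
    """Query the black-box target and classify its confidence vectors."""
    scores = target_model.predict_proba(samples)
    return attack_clf.predict(scores)  # 1 = predicted member, 0 = non-member
```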
Disparate Vulnerability to Membership Inference Attacks
Differential privacy bounds disparate vulnerability but can significantly reduce the accuracy of the model.
Membership Inference Attacks on Machine Learning: A Survey
In recent years, MIAs have been shown to be effective on various ML models, e.g., classification models and generative models.
Safety and Performance, Why not Both? Bi-Objective Optimized Model Compression toward AI Software Deployment
By simulating the attack mechanism as the safety test, SafeCompress can automatically compress a big model to a small one following the dynamic sparse training paradigm.
Practical Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration
However, this hypothesis relies heavily on the overfitting of target models, which can be mitigated by multiple regularization methods and by the generalization ability of LLMs.
Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression against Heterogeneous Attacks Toward AI Software Deployment
To mitigate this issue, AI software compression plays a crucial role, which aims to compress model size while keeping high performance.