Inference Attack
86 papers with code • 0 benchmarks • 2 datasets
Latest papers with no code
Towards Reliable Empirical Machine Unlearning Evaluation: A Game-Theoretic View
Machine unlearning is the process of updating a machine learning model to remove the information contained in specific training data samples, in order to comply with data protection regulations that allow individuals to request the removal of their personal data.
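The unlearning goal described above can be sketched with a toy example. This is a minimal illustration, not the paper's method: the "model" is a trivial per-class-mean classifier, and "exact" unlearning is done by retraining on the remaining data, so the result is identical to a model that never saw the removed sample. All names here are illustrative.

```python
def train(samples):
    """Fit per-class means from (feature, label) pairs."""
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def unlearn(samples, forget_set):
    """Exact unlearning: retrain on the data minus the forgotten samples."""
    remaining = [s for s in samples if s not in forget_set]
    return train(remaining)

data = [(1.0, "a"), (3.0, "a"), (10.0, "b")]
forget = [(3.0, "a")]

# The unlearned model is indistinguishable from one never trained on (3.0, "a").
assert unlearn(data, forget) == train([(1.0, "a"), (10.0, "b")])
```

Evaluating unlearning (as the paper proposes) then amounts to testing whether an adversary can still infer anything about the forgotten samples from the updated model.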
Hyperparameter Optimization for SecureBoost via Constrained Multi-Objective Federated Learning
This vulnerability means that SecureBoost's current heuristic hyperparameter configuration may yield a suboptimal trade-off between utility, privacy, and efficiency, three pivotal elements of a trustworthy federated learning system.
A Federated Parameter Aggregation Method for Node Classification Tasks with Different Graph Network Structures
Additionally, to assess the privacy of FLGNN, this paper designs membership inference attack experiments and differential privacy defense experiments.
$\nabla \tau$: Gradient-based and Task-Agnostic Machine Unlearning
In this study, we introduce Gradient-based and Task-Agnostic machine Unlearning ($\nabla \tau$), an optimization framework designed to remove the influence of a subset of training data efficiently.
Uncertainty, Calibration, and Membership Inference Attacks: An Information-Theoretic Perspective
We derive bounds on the advantage of an MIA adversary with the aim of offering insights into the impact of uncertainty and calibration on the effectiveness of MIAs.
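A common way to make the "advantage" of an MIA adversary concrete is the baseline loss-thresholding attack: guess "member" when the model's loss on a sample is below a threshold, and measure advantage as TPR minus FPR. The sketch below is illustrative only (the losses are made-up numbers, not from the paper); members tend to have lower loss because the model has fit them during training.

```python
def mia_advantage(member_losses, nonmember_losses, threshold):
    """Advantage = P(guess member | member) - P(guess member | non-member)."""
    tpr = sum(l < threshold for l in member_losses) / len(member_losses)
    fpr = sum(l < threshold for l in nonmember_losses) / len(nonmember_losses)
    return tpr - fpr

members = [0.05, 0.10, 0.20, 0.08]     # low loss: seen during training
nonmembers = [0.90, 1.20, 0.60, 1.50]  # higher loss: unseen data
print(mia_advantage(members, nonmembers, threshold=0.5))  # prints 1.0: perfectly separable here
```

An advantage of 0 means the attack does no better than random guessing; bounds like those in the paper cap this quantity in terms of the model's uncertainty and calibration.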
Why Does Differential Privacy with Large Epsilon Defend Against Practical Membership Inference Attacks?
In practical applications, such a worst-case guarantee may be overkill: practical attackers may lack exact knowledge of (nearly all of) the private data, and our data set might be easier to defend, in some sense, than the worst-case data set.
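The role epsilon plays can be seen in the classic Laplace mechanism (a standard DP building block, used here only as an illustration of the question above, not as the paper's construction): noise is drawn with scale sensitivity/epsilon, so a large epsilon means little noise and a weak worst-case guarantee, yet, as the paper asks, practical attacks may still fail. Parameters below are illustrative.

```python
import math
import random
import statistics

def laplace_noise(sensitivity, epsilon, rng):
    """One draw of Laplace(scale = sensitivity / epsilon) noise via inverse CDF."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

rng = random.Random(0)
for eps in (0.1, 1.0, 10.0):
    avg = statistics.mean(abs(laplace_noise(1.0, eps, rng)) for _ in range(10_000))
    print(f"epsilon={eps}: mean |noise| ~ {avg:.3f}")  # expected magnitude ~ sensitivity/epsilon
```

The expected noise magnitude equals sensitivity/epsilon, so at epsilon = 10 a count query is perturbed by roughly 0.1 on average, essentially nothing, which is why the gap between the formal worst-case bound and practical attack success is worth explaining.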
Understanding Practical Membership Privacy of Deep Learning
We apply a state-of-the-art membership inference attack (MIA) to systematically test the practical privacy vulnerability of fine-tuning large image classification models. We focus on understanding the properties of data sets and samples that make them vulnerable to membership inference.
De-identification is not always enough
In this work, we (i) demonstrate that de-identification of real clinical notes does not protect records against a membership inference attack, (ii) propose a novel approach to generating synthetic clinical notes using current state-of-the-art large language models, (iii) evaluate the performance of the synthetically generated notes on a clinical-domain task, and (iv) propose a way to mount a membership inference attack where the target model is trained on synthetic data.
Physical Trajectory Inference Attack and Defense in Decentralized POI Recommendation
Empirical results demonstrate that the proposed physical trajectory inference attack (PTIA) poses a significant threat to users' historical trajectories.
Inference Attacks Against Face Recognition Model without Classification Layers
To the best of our knowledge, the proposed attack model is the first in the literature developed for face recognition (FR) models without a classification layer.