Inference Attack

86 papers with code • 0 benchmarks • 2 datasets

Latest papers with no code

Towards Reliable Empirical Machine Unlearning Evaluation: A Game-Theoretic View

no code yet • 17 Apr 2024

Machine unlearning is the process of updating machine learning models to remove the information contributed by specific training data samples, in order to comply with data protection regulations that allow individuals to request the removal of their personal data.
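
As a rough illustration of what an empirical unlearning evaluation has to measure (a minimal sketch, not the paper's game-theoretic metric; `forget_quality` and the loss arrays below are hypothetical placeholders), one can compare the unlearned model's per-example losses on the forget set with those of a model retrained from scratch without that data:

```python
# Hypothetical sketch: compare an "unlearned" model to a retrain-from-scratch
# reference on the forget set. If unlearning worked, the two loss
# distributions on forgotten examples should be hard to tell apart.
import numpy as np
from scipy.stats import ks_2samp

def forget_quality(unlearned_losses: np.ndarray, retrained_losses: np.ndarray) -> float:
    """Return the KS-test p-value between per-example losses on the forget set.

    A small p-value suggests the unlearned model still treats the forgotten
    examples differently from a model that never saw them.
    """
    statistic, p_value = ks_2samp(unlearned_losses, retrained_losses)
    return p_value

# Toy usage with synthetic loss values (placeholders, not real models).
rng = np.random.default_rng(0)
unlearned = rng.normal(loc=0.9, scale=0.3, size=500)   # losses of unlearned model
retrained = rng.normal(loc=1.0, scale=0.3, size=500)   # losses of retrained model
print(f"forget-quality p-value: {forget_quality(unlearned, retrained):.3f}")
```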

Hyperparameter Optimization for SecureBoost via Constrained Multi-Objective Federated Learning

no code yet • 6 Apr 2024

This vulnerability means that the current heuristic hyperparameter configuration of SecureBoost may settle for a suboptimal trade-off between utility, privacy, and efficiency, all of which are pivotal to a trustworthy federated learning system.
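
As a generic illustration of constrained multi-objective hyperparameter selection (a sketch under assumed objectives, not SecureBoost's or the paper's actual optimizer; the `Config` fields and constraint values are made up), one might filter candidate configurations by hard privacy and efficiency constraints and keep the Pareto-optimal survivors:

```python
# Illustrative sketch (not the paper's algorithm): pick hyperparameter
# configurations that satisfy hard constraints and are Pareto-optimal
# across utility (maximize), privacy leakage and training time (minimize).
from dataclasses import dataclass

@dataclass
class Config:
    name: str
    utility: float          # e.g. AUC of the federated model
    privacy_leakage: float  # e.g. estimated label-leakage rate
    time_cost: float        # e.g. training time in minutes

def dominates(a: Config, b: Config) -> bool:
    """True if a is at least as good as b on every objective and strictly better on one."""
    at_least_as_good = (a.utility >= b.utility and a.privacy_leakage <= b.privacy_leakage
                        and a.time_cost <= b.time_cost)
    strictly_better = (a.utility > b.utility or a.privacy_leakage < b.privacy_leakage
                       or a.time_cost < b.time_cost)
    return at_least_as_good and strictly_better

def pareto_front(configs, max_leakage=0.1, max_time=60.0):
    feasible = [c for c in configs
                if c.privacy_leakage <= max_leakage and c.time_cost <= max_time]
    return [c for c in feasible if not any(dominates(o, c) for o in feasible if o is not c)]

candidates = [
    Config("deep-trees", 0.91, 0.12, 55.0),
    Config("shallow-trees", 0.88, 0.05, 20.0),
    Config("few-rounds", 0.85, 0.04, 10.0),
]
print([c.name for c in pareto_front(candidates)])
```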

A Federated Parameter Aggregation Method for Node Classification Tasks with Different Graph Network Structures

no code yet • 24 Mar 2024

Additionally, to assess the privacy of FLGNN, this paper designs membership inference attack experiments and differential privacy defense experiments.
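
For context, a typical differential privacy defense in federated aggregation clips each client's update and adds Gaussian noise before averaging; the sketch below illustrates that standard recipe under assumed parameters, not FLGNN's specific defense:

```python
# Minimal sketch (assumed setup, not FLGNN's exact defense): clip each
# client's parameter update and add Gaussian noise to the average,
# the standard Gaussian-mechanism recipe used as a DP defense in FL.
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    rng = rng or np.random.default_rng()
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    mean_update = np.mean(clipped, axis=0)
    # Noise scale follows the Gaussian mechanism applied to the clipped average.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean_update + rng.normal(0.0, sigma, size=mean_update.shape)

updates = [np.random.randn(10) for _ in range(5)]   # stand-ins for GNN weight deltas
print(dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.1))
```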

$\nabla \tau$: Gradient-based and Task-Agnostic machine Unlearning

no code yet • 21 Mar 2024

In this study, we introduce Gradient-based and Task-Agnostic machine Unlearning ($\nabla \tau$), an optimization framework designed to remove the influence of a subset of training data efficiently.
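
A common gradient-based way to realize this, shown below as a hedged sketch rather than the exact $\nabla \tau$ procedure, is to take gradient-ascent steps on the forget set while taking ordinary descent steps on retained data to preserve utility (the toy model and batches are placeholders):

```python
# Hedged sketch of a common gradient-based unlearning step (not necessarily
# the exact procedure of the paper): ascend on the forget-set loss so the
# model "un-fits" those examples, descend on the retain-set loss to keep utility.
import torch
import torch.nn.functional as F

def unlearning_step(model, forget_batch, retain_batch, lr=1e-3):
    x_f, y_f = forget_batch
    x_r, y_r = retain_batch
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    opt.zero_grad()
    # Negative sign = gradient ascent on the forget loss; plain descent on the retain loss.
    loss = -F.cross_entropy(model(x_f), y_f) + F.cross_entropy(model(x_r), y_r)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with a linear classifier and random data.
model = torch.nn.Linear(20, 3)
forget = (torch.randn(8, 20), torch.randint(0, 3, (8,)))
retain = (torch.randn(32, 20), torch.randint(0, 3, (32,)))
print(unlearning_step(model, forget, retain))
```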

Uncertainty, Calibration, and Membership Inference Attacks: An Information-Theoretic Perspective

no code yet • 16 Feb 2024

We derive bounds on the advantage of an MIA adversary with the aim of offering insights into the impact of uncertainty and calibration on the effectiveness of MIAs.
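
For context, the quantity usually bounded in such analyses is the membership advantage, $\mathrm{Adv}(\mathcal{A}) = \Pr[\mathcal{A} = 1 \mid \text{member}] - \Pr[\mathcal{A} = 1 \mid \text{non-member}] = \mathrm{TPR} - \mathrm{FPR}$; a classic bound of this type (not the paper's result) states that any adversary against an $\epsilon$-differentially private training algorithm has $\mathrm{Adv} \le e^{\epsilon} - 1$ (Yeom et al., 2018).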

Why Does Differential Privacy with Large Epsilon Defend Against Practical Membership Inference Attacks?

no code yet • 14 Feb 2024

In practical applications, such a worst-case guarantee may be overkill: practical attackers may lack exact knowledge of (nearly all of) the private data, and our data set might be easier to defend, in some sense, than the worst-case data set.
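
For reference, $(\epsilon, \delta)$-differential privacy requires $\Pr[M(D) \in S] \le e^{\epsilon} \Pr[M(D') \in S] + \delta$ for every pair of neighbouring datasets $D, D'$ and every output set $S$; at a large $\epsilon$ such as $10$ the multiplicative factor $e^{10} \approx 22000$ makes this worst-case bound nearly vacuous, which is why the empirical protection observed against practical membership inference attacks calls for a separate explanation.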

Understanding Practical Membership Privacy of Deep Learning

no code yet • 7 Feb 2024

We apply a state-of-the-art membership inference attack (MIA) to systematically test the practical privacy vulnerability of fine-tuning large image classification models. We focus on understanding the properties of data sets and samples that make them vulnerable to membership inference.
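
For orientation, the simplest membership inference baseline thresholds the target model's per-example loss; the sketch below shows that baseline (not the state-of-the-art attack used in the paper), with synthetic loss values standing in for a real model:

```python
# Minimal loss-threshold MIA sketch (a simple baseline, not the
# state-of-the-art attack referenced above): predict "member" when the
# target model's per-example loss falls below a threshold calibrated on
# known non-members.
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold=None):
    if threshold is None:
        # Calibrate on non-member losses, e.g. their 10th percentile.
        threshold = np.percentile(nonmember_losses, 10)
    tpr = np.mean(member_losses < threshold)      # members correctly flagged
    fpr = np.mean(nonmember_losses < threshold)   # non-members wrongly flagged
    return tpr - fpr  # membership advantage of this simple attack

# Toy losses: training members tend to have lower loss than held-out points.
rng = np.random.default_rng(1)
members = rng.gamma(shape=1.0, scale=0.5, size=1000)
nonmembers = rng.gamma(shape=2.0, scale=0.5, size=1000)
print(f"advantage: {loss_threshold_mia(members, nonmembers):.2f}")
```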

De-identification is not always enough

no code yet • 31 Jan 2024

In this work, we (i) demonstrated that de-identification of real clinical notes does not protect records against a membership inference attack, (ii) proposed a novel approach to generate synthetic clinical notes using current state-of-the-art large language models, (iii) evaluated the performance of the synthetically generated notes on a clinical domain task, and (iv) proposed a way to mount a membership inference attack where the target model is trained with synthetic data.

Physical Trajectory Inference Attack and Defense in Decentralized POI Recommendation

no code yet • 26 Jan 2024

Empirical results demonstrate that PTIA poses a significant threat to users' historical trajectories.

Inference Attacks Against Face Recognition Model without Classification Layers

no code yet • 24 Jan 2024

To the best of our knowledge, the proposed attack model is the first in the literature developed for FR models without a classification layer.
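
One intuition for why this setting is harder: without a classification layer the adversary only observes embeddings, so any membership signal has to come from the embedding space itself. The sketch below is a hypothetical illustration of one such signal (not the paper's attack): how tightly the model clusters augmented views of the same face, with `fake_embed` and the augmentations as stand-ins for a real FR model and image transforms.

```python
# Hypothetical illustration (not the paper's attack): score membership by the
# average pairwise cosine similarity of embeddings of augmented views of one
# face; training images often yield more consistent embeddings than unseen ones.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def membership_score(embed_fn, image, augmentations):
    """Average pairwise cosine similarity over embeddings of augmented views."""
    views = [embed_fn(aug(image)) for aug in augmentations]
    sims = [cosine(views[i], views[j])
            for i in range(len(views)) for j in range(i + 1, len(views))]
    return float(np.mean(sims))

# Toy stand-ins so the sketch runs end to end.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))                 # "model": a fixed random projection
fake_embed = lambda x: x @ W
image = rng.normal(size=8)
augs = [lambda x, s=s: x + rng.normal(scale=s, size=x.shape) for s in (0.01, 0.05, 0.1)]
print(membership_score(fake_embed, image, augs))
```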