# Inference Attack

60 papers with code • 0 benchmarks • 2 datasets

Inference attacks attempt to extract information about a machine learning model's training data from the model itself, e.g., whether a given record was in the training set (membership inference), statistical properties of the training distribution (distribution inference), or the training inputs themselves (gradient inversion).


# Membership Inference Attacks against Machine Learning Models

18 Oct 2016

We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained.

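The attack in this paper is built on shadow training: the attacker trains several shadow models that imitate the target, labels their outputs as member or non-member, and fits an attack classifier on the resulting confidence vectors. A minimal sketch, assuming black-box `predict_proba` access and using scikit-learn stand-ins (the random-forest targets, split sizes, and sorted-posterior features are illustrative simplifications, not the paper's exact setup):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=6000, n_features=20, random_state=0)

def train_and_query(train_idx, test_idx):
    """Train a model; return confidence vectors for members and non-members."""
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    return model.predict_proba(X[train_idx]), model.predict_proba(X[test_idx])

# Shadow phase: each shadow model yields confidence vectors labelled
# 1 (member of its training set) or 0 (non-member).
attack_X, attack_y = [], []
n_shadow, size = 4, 500
for _ in range(n_shadow):
    idx = rng.choice(len(X), 2 * size, replace=False)
    p_in, p_out = train_and_query(idx[:size], idx[size:])
    attack_X += [p_in, p_out]
    attack_y += [np.ones(size), np.zeros(size)]
attack_X, attack_y = np.vstack(attack_X), np.concatenate(attack_y)

# Attack model: confidence vector -> member / non-member.
# Sorting the vector discards class identity (a common simplification).
attack = LogisticRegression().fit(np.sort(attack_X, axis=1), attack_y)

# Target phase: attack a fresh model trained the same way.
idx = rng.choice(len(X), 2 * size, replace=False)
p_in, p_out = train_and_query(idx[:size], idx[size:])
guesses = attack.predict(np.sort(np.vstack([p_in, p_out]), axis=1))
truth = np.r_[np.ones(size), np.zeros(size)]
print("membership attack accuracy:", (guesses == truth).mean())
```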

# ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

4 Jun 2018

In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility of the ML model.


# Synthesis of Realistic ECG using Generative Adversarial Networks

19 Sep 2019

Finally, we discuss the privacy concerns associated with sharing synthetic data produced by GANs and test their ability to withstand a simple membership inference attack.


# Disparate Vulnerability to Membership Inference Attacks

2 Jun 2019

Differential privacy bounds disparate vulnerability but can significantly reduce the accuracy of the model.

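The usual way differential privacy enters training is DP-SGD: clip each example's gradient to a norm bound and add Gaussian noise, which is also where the accuracy cost comes from. A minimal PyTorch sketch, assuming a microbatch-of-one loop (a production run would use a library such as Opacus); the `clip_norm` and `noise_mult` values are illustrative:

```python
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Linear(20, 2)                       # toy classifier
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

clip_norm = 1.0    # per-example gradient norm bound C (illustrative)
noise_mult = 1.1   # noise standard deviation, as a multiple of C (illustrative)

def dp_sgd_step(X, y):
    """One DP-SGD step: clip each example's gradient, sum, add Gaussian noise, average."""
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for xi, yi in zip(X, y):                   # microbatches of one -> per-example grads
        model.zero_grad()
        loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
        grads = [p.grad.detach() for p in model.parameters()]
        total = torch.sqrt(sum(g.norm() ** 2 for g in grads))
        scale = (clip_norm / (total + 1e-12)).clamp(max=1.0)   # clip to norm <= C
        for s, g in zip(summed, grads):
            s += g * scale
    for p, s in zip(model.parameters(), summed):
        noise = torch.randn_like(s) * noise_mult * clip_norm
        p.grad = (s + noise) / len(X)          # noisy average gradient
    opt.step()

dp_sgd_step(torch.randn(32, 20), torch.randint(0, 2, (32,)))
```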

# MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

23 Sep 2019

Specifically, given black-box access to the target classifier, the attacker trains a binary classifier, which takes a data sample's confidence score vector predicted by the target classifier as an input and predicts the data sample to be a member or non-member of the target classifier's training dataset.

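MemGuard's defense responds by adding carefully crafted noise to the confidence score vector so that this attack classifier is fooled while the predicted label is preserved. A minimal sketch of that core idea, assuming a stand-in attack network and plain gradient descent (the paper instead solves a constrained optimization with an explicit utility budget):

```python
import torch

torch.manual_seed(0)

# Stand-in attack classifier: maps a 10-class confidence vector to P(member).
# In MemGuard this is the defender's surrogate for the attacker's model.
attack = torch.nn.Sequential(
    torch.nn.Linear(10, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))

def memguard_perturb(scores, steps=100, lr=0.1):
    """Perturb a confidence vector so the attack model outputs ~0.5 (uninformative),
    while keeping the top-1 predicted label unchanged."""
    label = scores.argmax()
    logits = scores.log().clone().requires_grad_(True)    # optimize in logit space
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        probs = torch.softmax(logits, dim=0)              # stays on the simplex
        loss = (torch.sigmoid(attack(probs)) - 0.5) ** 2  # push membership score to 0.5
        loss.backward()
        opt.step()
    out = torch.softmax(logits.detach(), dim=0)
    return out if out.argmax() == label else scores       # fall back if label flipped

scores = torch.softmax(torch.randn(10), dim=0)
protected = memguard_perturb(scores)
print("attack score before:", torch.sigmoid(attack(scores)).item())
print("attack score after: ", torch.sigmoid(attack(protected)).item())
```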

# Quantifying identifiability to choose and audit $\epsilon$ in differentially private deep learning

We transform $(\epsilon,\delta)$ to a bound on the Bayesian posterior belief of the adversary assumed by differential privacy concerning the presence of any record in the training dataset.

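For intuition, in the pure $\epsilon$-DP case with a uniform prior, Bayes' rule bounds that posterior belief by $e^\epsilon / (1 + e^\epsilon)$, since the likelihood ratio of any output is at most $e^\epsilon$. A quick computation (the $\delta = 0$ case is a simplification here, not the paper's full bound):

```python
import math

def posterior_bound(eps: float) -> float:
    """Upper bound on the adversary's posterior belief that a given record is in
    the training set, under pure eps-DP and a uniform (0.5) prior: the likelihood
    ratio of any output is at most e^eps, so Bayes' rule gives e^eps / (1 + e^eps)."""
    return math.exp(eps) / (1 + math.exp(eps))

for eps in (0.1, 1.0, 3.0, 8.0):
    print(f"eps = {eps:>4}: posterior belief <= {posterior_bound(eps):.3f}")
```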

# Membership Inference Attacks on Machine Learning: A Survey

In recent years, MIAs have been shown to be effective on various ML models, e.g., classification models and generative models.


# See through Gradients: Image Batch Recovery via GradInversion

In this work, we introduce GradInversion, with which input images from a larger batch (8–48 images) can also be recovered for large networks such as ResNets (50 layers), on complex datasets such as ImageNet (1000 classes, 224×224 px).

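Gradient inversion works by optimizing a dummy input until its gradient matches the gradient the client shared. A minimal single-image sketch with a toy linear model, assuming the label has already been restored (GradInversion recovers batch labels from final-layer gradients and adds image-fidelity and group-consistency priors on top of the matching loss below):

```python
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy stand-in for a ResNet
loss_fn = nn.CrossEntropyLoss()

# "Observed" gradient that a client shared for one private image.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])   # label assumed already restored from the last-layer gradient
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())

# Optimize a dummy image so its gradient matches the observed one.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        loss_fn(model(x_dummy), y_true), model.parameters(), create_graph=True)
    match = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    match.backward()            # gradient of the matching loss w.r.t. the dummy image
    opt.step()

print("mean absolute reconstruction error:", (x_dummy - x_true).abs().mean().item())
```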

# Formalizing and Estimating Distribution Inference Risks

13 Sep 2021

Distribution inference attacks can pose serious risks when models are trained on private data, but are difficult to distinguish from the intrinsic purpose of statistical machine learning -- namely, to produce models that capture statistical properties about a distribution.


# Dissecting Distribution Inference

15 Dec 2022

A distribution inference attack aims to infer statistical properties of data used to train machine learning models.

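A common instantiation is a meta-classifier attack: train many shadow models on datasets that do or do not exhibit the property, then classify the target model from its parameters. A toy sketch, where the secret property is the positive-class fraction and the meta-features are raw logistic-regression weights (both illustrative assumptions, not this paper's exact formulation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_dataset(pos_frac, n=400, d=8):
    """Dataset whose positive-class fraction is the secret distribution property."""
    y = (rng.random(n) < pos_frac).astype(int)
    X = rng.normal(size=(n, d)) + y[:, None]     # class-dependent shift
    return X, y

def train_and_extract(pos_frac):
    """Train a model on one dataset; its weights are the meta-classifier's features."""
    X, y = sample_dataset(pos_frac)
    m = LogisticRegression(max_iter=1000).fit(X, y)
    return np.r_[m.coef_.ravel(), m.intercept_]

# Shadow models: property 0 = balanced data (50% positives), property 1 = skewed (80%).
feats = np.array([train_and_extract(f) for f in [0.5] * 50 + [0.8] * 50])
labels = np.r_[np.zeros(50), np.ones(50)]
meta = LogisticRegression(max_iter=1000).fit(feats, labels)

# Attack a fresh "target" model trained on skewed data.
target = train_and_extract(0.8)
print("P(property = skewed):", meta.predict_proba(target[None])[0, 1])
```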