Greatest papers with code

ML Privacy Meter: Aiding Regulatory Compliance by Quantifying the Privacy Risks of Machine Learning

18 Jul 2020 · privacytrustlab/ml_privacy_meter

In addition to the threat of illegitimate access to data through security breaches, machine learning models pose a further privacy risk: they indirectly reveal information about the training data through their predictions and parameters.

INFERENCE ATTACK

Revisiting Membership Inference Under Realistic Assumptions

21 May 2020 · bargavj/EvaluatingDPML

Since previous inference attacks fail in the imbalanced-prior setting, we develop a new inference attack based on the intuition that inputs corresponding to training-set members lie near a local minimum of the loss function. We show that an attack combining this signal with thresholds on the per-instance loss can achieve high PPV (positive predictive value) even in settings where other attacks appear ineffective.

INFERENCE ATTACK
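The per-instance loss thresholding that the abstract above describes can be illustrated with a minimal sketch. This is a toy illustration, not the paper's implementation: `loss_threshold_attack`, the threshold value, and the toy loss values are all hypothetical; the core idea is only that training members tend to have unusually low loss.

```python
import numpy as np

def loss_threshold_attack(per_example_losses, threshold):
    """Flag examples whose loss falls below a threshold as training-set
    members (hypothetical helper; the paper combines this signal with a
    local-minimum test)."""
    losses = np.asarray(per_example_losses, dtype=float)
    return losses < threshold

# Toy losses: members sit near a loss minimum, non-members do not.
member_losses = [0.01, 0.05, 0.02]
nonmember_losses = [0.9, 1.2, 0.7]
preds = loss_threshold_attack(member_losses + nonmember_losses, threshold=0.5)
```

Calibrating the threshold (for example, on a shadow model's loss distribution) is what determines the attack's precision in practice.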

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

4 Jun 2018 · Lab41/cyphercat

In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks that maintain a high level of utility for the ML model.

INFERENCE ATTACK

Membership Inference Attacks against Machine Learning Models

18 Oct 2016 · spring-epfl/mia

We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained.

INFERENCE ATTACK

Privacy Risks of Securing Machine Learning Models against Adversarial Examples

24 May 2019 · inspire-group/privacy-vs-robustness

To perform the membership inference attacks, we leverage the existing inference methods that exploit model predictions.

ADVERSARIAL DEFENSE · INFERENCE ATTACK

Synthesis of Realistic ECG using Generative Adversarial Networks

19 Sep 2019 · Brophy-E/ECG_GAN_MBD

Finally, we discuss the privacy concerns associated with sharing synthetic data produced by GANs and test their ability to withstand a simple membership inference attack.

INFERENCE ATTACK · TIME SERIES

GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models

9 Sep 2019 · DingfanChen/GAN-Leaks

In addition, we propose the first generic attack model that can be instantiated in a large range of settings and is applicable to various kinds of deep generative models.

INFERENCE ATTACK

Understanding Membership Inferences on Well-Generalized Learning Models

13 Feb 2018 · BielStela/membership_inference

Membership Inference Attack (MIA) determines the presence of a record in a machine learning model's training data by querying the model.

INFERENCE ATTACK

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

23 Sep 2019 · jjy1994/MemGuard

Specifically, given black-box access to the target classifier, the attacker trains a binary classifier that takes a data sample's confidence-score vector, as predicted by the target classifier, and predicts whether the sample is a member of the target classifier's training dataset.

INFERENCE ATTACK
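The black-box attack described in the abstract above reduces to a binary classifier over confidence-score vectors. A minimal sketch, under loud assumptions: the paper's attack classifier is a neural network, while this toy version is a hand-rolled logistic regression; the helper names and the toy confidence vectors are all hypothetical. The only idea carried over is that members tend to receive more peaked (over-confident) score vectors than non-members.

```python
import numpy as np

def train_attack_classifier(confidences, labels, lr=0.5, steps=500):
    """Fit a logistic-regression attack model mapping a target
    classifier's confidence vector to a member/non-member guess
    (toy stand-in for the paper's neural-network attack model)."""
    X = np.asarray(confidences, dtype=float)
    y = np.asarray(labels, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        grad = p - y                            # logistic-loss gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_member(w, b, conf):
    """True if the attack model scores the sample as a member."""
    return bool(1.0 / (1.0 + np.exp(-(np.asarray(conf) @ w + b))) > 0.5)

# Toy data: members get peaked score vectors, non-members flatter ones.
members = [[0.95, 0.03, 0.02], [0.90, 0.06, 0.04], [0.97, 0.02, 0.01]]
nonmembers = [[0.40, 0.35, 0.25], [0.50, 0.30, 0.20], [0.45, 0.30, 0.25]]
w, b = train_attack_classifier(members + nonmembers, [1, 1, 1, 0, 0, 0])
```

MemGuard's defense targets exactly this setup: it perturbs the confidence vector so such an attack classifier misfires, while leaving the predicted label unchanged.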

Systematic Evaluation of Privacy Risks of Machine Learning Models

24 Mar 2020 · inspire-group/membership-inference-evaluation

Machine learning models are prone to memorizing sensitive data, making them vulnerable to membership inference attacks in which an adversary aims to guess if an input sample was used to train the model.

INFERENCE ATTACK