Inference Attack

86 papers with code • 0 benchmarks • 2 datasets

Inference attacks aim to extract sensitive information from a trained machine learning model, such as whether a specific record was part of its training data (membership inference) or hidden attributes and properties of that data.

Latest papers with no code

Differentially Private and Adversarially Robust Machine Learning: An Empirical Evaluation

no code yet • 18 Jan 2024

Malicious adversaries can attack machine learning models to infer sensitive information or damage the system by launching a series of evasion attacks.

Reinforcement Unlearning

no code yet • 26 Dec 2023

Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models based on removal requests from data owners.

Task Contamination: Language Models May Not Be Few-Shot Anymore

no code yet • 26 Dec 2023

Large language models (LLMs) offer impressive performance in various zero-shot and few-shot tasks.

Adaptive Domain Inference Attack

no code yet • 22 Dec 2023

As deep neural networks are increasingly deployed in sensitive application domains such as healthcare and security, it is necessary to understand what kinds of sensitive information can be inferred from these models.

Poincaré Differential Privacy for Hierarchy-Aware Graph Embedding

no code yet • 19 Dec 2023

Specifically, PoinDP first learns the hierarchy weights for each entity based on the Poincaré model in hyperbolic space.
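
As background for the geometry this builds on: the embeddings live in the Poincaré ball, where geodesic distance grows rapidly toward the boundary, which is what makes the model hierarchy-aware. Below is a minimal sketch of the standard Poincaré distance only; the hierarchy-weight learning and noise calibration are specific to PoinDP and are not reproduced here.

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray, eps: float = 1e-9) -> float:
    """Geodesic distance between two points in the Poincare unit ball."""
    sq_diff = np.dot(u - v, u - v)
    denom = max((1.0 - np.dot(u, u)) * (1.0 - np.dot(v, v)), eps)
    # Closed form: d(u, v) = arcosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))
    return float(np.arccosh(1.0 + 2.0 * sq_diff / denom))

origin = np.array([0.0, 0.0])    # e.g. the root of a hierarchy
deep_a = np.array([0.90, 0.0])   # entities embedded near the boundary ("deeper" levels)
deep_b = np.array([0.0, 0.90])
print(poincare_distance(origin, deep_a))   # ~2.9
print(poincare_distance(deep_a, deep_b))   # ~5.2: distances grow fast near the boundary
```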

Low-Cost High-Power Membership Inference Attacks

no code yet • 6 Dec 2023

Under computation constraints, where only a limited number of pre-trained reference models (as few as 1) are available, and also when we vary other elements of the attack, our method performs exceptionally well, unlike some prior attacks that approach random guessing.
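
Most reference-model ("shadow") membership inference attacks share one core idea: calibrate the target model's loss on a candidate point against the loss of models trained without that point. The sketch below is a generic single-reference, loss-calibrated score on synthetic data, meant only to illustrate the setting; it is not the paper's statistical test.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, d=50):
    """Synthetic classification data with noisy labels (so models can overfit)."""
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
    return X, y

members_X, members_y = make_data(200)          # training data of the target model
ref_X, ref_y = make_data(200)                  # disjoint data for the reference model
non_members_X, non_members_y = make_data(200)  # never seen by either model

target = LogisticRegression(max_iter=2000).fit(members_X, members_y)
reference = LogisticRegression(max_iter=2000).fit(ref_X, ref_y)

def nll(model, X, y):
    """Per-example negative log-likelihood of the true label."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, 1.0))

def membership_score(X, y):
    """Reference-calibrated score: members tend to have unusually low target loss."""
    return nll(reference, X, y) - nll(target, X, y)

print("mean score, members    :", membership_score(members_X, members_y).mean())
print("mean score, non-members:", membership_score(non_members_X, non_members_y).mean())
```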

Practical Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration

no code yet • 10 Nov 2023

Prior attempts have quantified the privacy risks of language models (LMs) via MIAs, but there is still no consensus on whether existing MIA algorithms can cause significant privacy leakage on practical Large Language Models (LLMs).
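
A common starting point for such MIAs on causal LMs is a loss (perplexity) signal, optionally calibrated against reference text. The sketch below shows only that baseline signal; the self-prompt calibration proposed in the paper is not reproduced, and the model name and reference texts are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is a placeholder for the fine-tuned target model under attack.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def avg_nll(text: str) -> float:
    """Average next-token negative log-likelihood; unusually low values hint at memorization."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    return model(ids, labels=ids).loss.item()

candidate = "Example sentence whose membership in the fine-tuning set we want to test."
reference_texts = [  # placeholder calibration texts from the same general domain
    "An unrelated sentence drawn from the same general domain.",
    "Another unrelated sentence used only for calibration.",
]

calibration = sum(avg_nll(t) for t in reference_texts) / len(reference_texts)
score = calibration - avg_nll(candidate)  # higher => more member-like
print("membership score:", score)
```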

Preserving Privacy in GANs Against Membership Inference Attack

no code yet • 6 Nov 2023

In the present work, the overfitting in GANs is studied in terms of the discriminator, and a more general measure of overfitting based on the Bhattacharyya coefficient is defined.
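
One way to make this concrete: the Bhattacharyya coefficient between the discriminator's score distribution on training data and on held-out data measures their overlap, and low overlap signals overfitting. Below is a histogram-based estimate on toy scores; the exact estimator and how the paper integrates it into GAN training are assumptions, not reproduced from the paper.

```python
import numpy as np

def bhattacharyya_coefficient(scores_a, scores_b, bins=30):
    """Overlap of two empirical score distributions: 1.0 = identical, 0.0 = disjoint."""
    lo = min(scores_a.min(), scores_b.min())
    hi = max(scores_a.max(), scores_b.max())
    p, _ = np.histogram(scores_a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(scores_b, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

rng = np.random.default_rng(0)
# Toy discriminator scores: an overfit discriminator separates training from held-out data.
print("high overlap:", bhattacharyya_coefficient(rng.normal(0.5, 0.1, 5000),
                                                  rng.normal(0.5, 0.1, 5000)))
print("low overlap :", bhattacharyya_coefficient(rng.normal(0.8, 0.1, 5000),
                                                  rng.normal(0.4, 0.1, 5000)))
```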

Black-Box Training Data Identification in GANs via Detector Networks

no code yet • 18 Oct 2023

In this paper we study whether, given access to a trained GAN as well as fresh samples from the underlying distribution, an attacker can efficiently identify whether a given point is a member of the GAN's training data.
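
Read loosely, a "detector network" in this black-box setting could be a classifier trained to distinguish generator samples from fresh real samples, whose confidence on a candidate point then serves as a membership signal, since an overfit GAN makes its own training points look generator-like. The sketch below follows that reading on toy 1-D data and is an assumption about the mechanism, not the paper's exact attack.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-ins for what a black-box attacker can obtain: samples queried from the GAN and
# fresh samples from the underlying distribution (1-D Gaussians, purely for illustration).
gan_samples = rng.normal(0.3, 0.8, size=(2000, 1))   # generator biased toward its training data
fresh_samples = rng.normal(0.0, 1.0, size=(2000, 1))

X = np.vstack([gan_samples, fresh_samples])
y = np.concatenate([np.ones(2000, dtype=int), np.zeros(2000, dtype=int)])  # 1 = generated

detector = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0).fit(X, y)

# Membership signal: how "generator-like" the detector finds a candidate point.
member_like = np.array([[0.35]])      # near the GAN's over-represented region
non_member_like = np.array([[-1.8]])
print("P(generated | member-like point)    :", detector.predict_proba(member_like)[0, 1])
print("P(generated | non-member-like point):", detector.predict_proba(non_member_like)[0, 1])
```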

A Comprehensive Study of Privacy Risks in Curriculum Learning

no code yet • 16 Oct 2023

Training a machine learning model with data following a meaningful order, i.e., from easy to hard, has proven effective in accelerating the training process and achieving better model performance.