data poisoning

11 papers with code · Adversarial

Leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

How To Backdoor Federated Learning

2 Jul 2018 ebagdasa/backdoor_federated_learning

An attacker selected in a single round of federated learning can cause the global model to immediately reach 100% accuracy on the backdoor task.

ANOMALY DETECTION DATA POISONING
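The headline result follows from a model-replacement attack: the selected attacker scales its submitted update so that the server's averaging step yields the backdoored model outright. A minimal sketch of that scaling, assuming plain equal-weight FedAvg (the toy weights and function names below are illustrative, not taken from the repo):

```python
import numpy as np

def fedavg(updates):
    """Server step: equal-weight average of client model weights (plain FedAvg)."""
    return np.mean(updates, axis=0)

def scaled_backdoor_update(global_w, backdoored_w, n_clients):
    # Model replacement: submit n*(w_backdoor - w_global) + w_global so that,
    # after averaging over n_clients, the aggregate equals the backdoored model.
    return n_clients * (backdoored_w - global_w) + global_w

# Toy demo with tiny weight vectors
global_w = np.array([0.0, 0.0])
backdoored_w = np.array([1.0, -1.0])            # the attacker's desired model
honest = [global_w.copy() for _ in range(9)]    # honest clients send ~global weights
mal = scaled_backdoor_update(global_w, backdoored_w, n_clients=10)
new_global = fedavg(honest + [mal])             # ≈ backdoored model [1., -1.]
```

In practice the attacker also balances the backdoor objective against staying close to the global model so the scaled update is not trivially detectable.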

Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise

NeurIPS 2018 mmazeika/glc

We propose a loss correction technique that uses trusted examples in a data-efficient manner to mitigate the effects of label noise on deep neural network classifiers.

DATA POISONING
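The correction has two steps: estimate a label-corruption matrix C from the trusted set, then train through C. A minimal NumPy sketch of that idea, assuming you already have softmax outputs from a model trained on the noisy labels (function names are illustrative; the actual implementation is in the linked repo):

```python
import numpy as np

def estimate_corruption_matrix(noisy_probs, trusted_labels, num_classes):
    """C[i, j] ~ p(noisy label = j | true label = i), estimated by averaging
    a noisy-trained model's softmax outputs over trusted examples of class i."""
    C = np.zeros((num_classes, num_classes))
    for i in range(num_classes):
        C[i] = noisy_probs[trusted_labels == i].mean(axis=0)
    return C

def corrected_loss(clean_probs, observed_label, C):
    # Push the model's clean-label prediction through C, then take cross-entropy
    # against the (possibly corrupted) observed label.
    noisy_probs = clean_probs @ C
    return -np.log(noisy_probs[observed_label] + 1e-12)

# Toy 2-class example: class-0 labels are flipped to class 1 about 20% of the time.
noisy_probs = np.array([[0.8, 0.2], [0.8, 0.2], [0.3, 0.7]])
trusted_labels = np.array([0, 0, 1])
C = estimate_corruption_matrix(noisy_probs, trusted_labels, 2)
loss = corrected_loss(np.array([1.0, 0.0]), observed_label=0, C=C)  # ≈ -log(0.8)
```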

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

NeurIPS 2018 ashafahi/inceptionv3-transferLearn-poison

The proposed attacks use "clean-labels"; they don't require the attacker to have any control over the labeling of training data.

DATA POISONING FACE RECOGNITION TRANSFER LEARNING
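The attack's core "feature collision" objective can be sketched as gradient descent that pulls the poison's features toward the target's while keeping the image close to a correctly labeled base image. A toy version with a hypothetical linear feature extractor W (the real attack uses a deep network's penultimate layer; all names and values here are illustrative):

```python
import numpy as np

def poison_step(x, base, target_feat, W, lr=0.01, beta=0.1):
    """One gradient step on the feature-collision objective
        ||W x - f(target)||^2 + beta * ||x - base||^2
    for a linear featurizer W: collide with the target in feature space
    while staying visually close to the clean base image."""
    grad = 2 * W.T @ (W @ x - target_feat) + 2 * beta * (x - base)
    return x - lr * grad

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))         # stand-in feature extractor
base = rng.normal(size=8)           # clean image carrying the attacker's label
target_feat = rng.normal(size=4)    # features of the victim test point
x = base.copy()
for _ in range(500):
    x = poison_step(x, base, target_feat, W)
# x's features now sit near the target's, yet x stays anchored to `base`,
# so a human labeler would still assign it the base's (clean) label.
```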

Radioactive data: tracing through training

3 Feb 2020 facebookresearch/radioactive_data

The mark is robust to strong variations such as different architectures or optimization methods.

DATA AUGMENTATION DATA POISONING

MetaPoison: Practical General-purpose Clean-label Data Poisoning

1 Apr 2020 wronnyhuang/metapoison

Data poisoning, the process by which an attacker takes control of a model by making imperceptible changes to a subset of the training data, is an emerging threat in the context of neural networks.

AUTOML DATA POISONING

Penalty Method for Inversion-Free Deep Bilevel Optimization

8 Nov 2019 jihunhamm/bilevel-penalty

Bilevel optimization problems are at the center of several important machine learning problems, such as hyperparameter tuning, data denoising, meta- and few-shot learning, and data poisoning.

BILEVEL OPTIMIZATION DATA POISONING DENOISING FEW-SHOT LEARNING OMNIGLOT

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping

26 Feb 2020 Sanghyun-Hong/Gradient-Shaping

In this work, we study the feasibility of an attack-agnostic defense relying on artifacts that are common to all poisoning attacks.

DATA POISONING
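The artifact common to poisoning attacks is that poisoned examples tend to produce gradients with unusually large magnitudes and distinct orientations; gradient shaping bounds that influence. A DP-SGD-style sketch (clip each per-example gradient, average, add noise), with illustrative names:

```python
import numpy as np

def shape_gradients(per_example_grads, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip each per-example gradient to at most `clip_norm` in L2 norm,
    average, then add Gaussian noise. A poisoned example's outsized gradient
    can then move the update by at most clip_norm / batch_size."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(scale=noise_std, size=avg.shape)

# One huge (poison-like) gradient and one benign gradient; noise disabled for clarity.
grads = [np.array([100.0, 0.0]), np.array([0.1, 0.0])]
shaped = shape_gradients(grads, clip_norm=1.0, noise_std=0.0)
# the poison gradient is clipped from norm 100 down to norm 1 before averaging
```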

Poisoning Attacks with Generative Adversarial Nets

ICLR 2020 lmunoz-gonzalez/Poisoning-Attacks-with-Back-gradient-Optimization

In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers by generating adversarial training examples, i.e., samples that look like genuine data points but degrade the classifier's accuracy when used for training.

DATA POISONING

Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection

8 Feb 2018 lmunoz-gonzalez/Poisoning-Attacks-with-Back-gradient-Optimization

We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered when crafting the attack.

ANOMALY DETECTION DATA POISONING NETWORK INTRUSION DETECTION OUTLIER DETECTION
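A minimal version of the underlying idea: flag training points whose features sit far from their class centroid. This is a generic MAD-based detector standing in for the paper's method, with illustrative names and a toy dataset:

```python
import numpy as np

def flag_outliers(feats, labels, threshold=3.0):
    """Per class: flag points whose distance to the class centroid exceeds
    `threshold` median absolute deviations. Poisons crafted without
    detectability constraints tend to land far from the genuine cluster."""
    flags = np.zeros(len(feats), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = feats[idx].mean(axis=0)
        d = np.linalg.norm(feats[idx] - centroid, axis=1)
        mad = np.median(np.abs(d - np.median(d))) + 1e-12
        flags[idx] = (d - np.median(d)) / mad > threshold
    return flags

# 20 genuine points at the origin plus one far-away "poison", all labeled class 0.
feats = np.vstack([np.zeros((20, 2)), [[21.0, 0.0]]])
labels = np.zeros(21, dtype=int)
flags = flag_outliers(feats, labels)  # only the last point is flagged
```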