Data Poisoning

38 papers with code • 0 benchmarks • 0 datasets

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, such that the model labels malicious examples into a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
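
As a minimal illustration of the definition above, the sketch below flips the labels of a small fraction of training samples to an attacker-chosen class. The function name and the NumPy-based setup are illustrative assumptions, not part of the cited source.

```python
# Minimal dirty-label poisoning sketch, assuming a generic (X, y) training
# set held as NumPy arrays; names are illustrative.
import numpy as np

def poison_labels(X, y, target_class, poison_fraction=0.05, seed=None):
    """Flip the labels of a small fraction of samples to `target_class`,
    so a model trained on (X, y_poisoned) maps similar inputs to that class."""
    rng = np.random.default_rng(seed)
    n_poison = int(poison_fraction * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y_poisoned = y.copy()
    y_poisoned[idx] = target_class   # e.g. relabel "spam" as "safe"
    return X, y_poisoned, idx
```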

Greatest papers with code

Backdoor Learning: A Survey

THUYimingLi/backdoor-learning-resources 17 Jul 2020

Backdoor attacks intend to embed a hidden backdoor into deep neural networks (DNNs), such that the attacked model performs well on benign samples, whereas its predictions are maliciously changed if the hidden backdoor is activated by an attacker-defined trigger.

Adversarial Attack Data Poisoning
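
To make the trigger idea concrete, here is a minimal BadNets-style sketch that stamps a small patch onto a fraction of training images and relabels them to the attacker's target class. The patch location, size, and poisoning rate are illustrative assumptions, not details from the survey.

```python
# Trigger-injection sketch, assuming image tensors shaped (N, C, H, W) in [0, 1].
import torch

def add_trigger(images, labels, target_class, patch_size=3, rate=0.1):
    """Stamp a white square in the bottom-right corner of a fraction of the
    images and relabel those images to the attacker's target class."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(rate * images.size(0))
    idx = torch.randperm(images.size(0))[:n_poison]
    images[idx, :, -patch_size:, -patch_size:] = 1.0  # the trigger patch
    labels[idx] = target_class
    return images, labels
```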

How To Backdoor Federated Learning

ebagdasa/backdoor_federated_learning 2 Jul 2018

An attacker selected in a single round of federated learning can cause the global model to immediately reach 100% accuracy on the backdoor task.

Anomaly Detection Data Poisoning +1
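
The single-round effect comes from model replacement: the attacker scales its update so that, after averaging, the global model is effectively swapped for the backdoored one. A minimal sketch of that scaling, assuming plain FedAvg with equally weighted clients, float-valued weight tensors, and illustrative names:

```python
# Model-replacement sketch: the malicious client's update is amplified so the
# averaged global model lands (approximately) on the backdoored weights.
def scale_malicious_update(backdoored_weights, global_weights, n_clients, server_lr=1.0):
    scale = n_clients / server_lr
    return {k: global_weights[k] + scale * (backdoored_weights[k] - global_weights[k])
            for k in global_weights}
```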

Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise

mmazeika/glc NeurIPS 2018

We propose a loss correction technique that uses trusted examples in a data-efficient manner to mitigate the effects of label noise on deep neural network classifiers.

Data Poisoning
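
A rough sketch of the idea: estimate a label-corruption matrix from the trusted examples and train against predictions pushed through that matrix. The function names and normalization details below are assumptions for illustration, not the authors' exact implementation.

```python
# Loss-correction sketch with trusted data (PyTorch).
import torch
import torch.nn.functional as F

def estimate_corruption_matrix(noisy_model, trusted_x, trusted_y, num_classes):
    """C[i, j] ~ p(noisy label = j | true label = i), averaged over trusted
    examples of each true class i, using a model trained on the noisy labels."""
    with torch.no_grad():
        probs = F.softmax(noisy_model(trusted_x), dim=1)
    C = torch.zeros(num_classes, num_classes)
    for i in range(num_classes):
        mask = trusted_y == i
        if mask.any():
            C[i] = probs[mask].mean(dim=0)
    return C

def corrected_loss(logits, noisy_labels, C):
    """Cross-entropy against predictions pushed through the corruption matrix."""
    adjusted = F.softmax(logits, dim=1) @ C          # p(noisy label | x)
    return F.nll_loss(torch.log(adjusted + 1e-12), noisy_labels)
```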

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks

aks2203/poisoning-benchmark 22 Jun 2020

Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference.

Data Poisoning

A Distributed Trust Framework for Privacy-Preserving Machine Learning

OpenMined/PyDentity 3 Jun 2020

Privacy-preserving techniques distribute computation in order to ensure that data remains in the control of the owner while learning takes place.

Data Poisoning Federated Learning

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

ashafahi/inceptionv3-transferLearn-poison NeurIPS 2018

The proposed attacks use "clean labels": they do not require the attacker to have any control over the labeling of the training data.

Data Poisoning Face Recognition +1
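
The core mechanism is a feature collision: the poison stays visually close to a base image of the target class while matching the victim image in the network's feature space. A sketch under assumed hyperparameters (optimizer, step count, beta):

```python
# Feature-collision poisoning sketch (PyTorch); names are illustrative.
import torch

def craft_poison(feature_extractor, base_img, target_img, beta=0.1, steps=500, lr=0.01):
    """Minimize ||f(poison) - f(target)||^2 + beta * ||poison - base||^2."""
    poison = base_img.clone().requires_grad_(True)
    target_feat = feature_extractor(target_img).detach()
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feat_loss = (feature_extractor(poison) - target_feat).pow(2).sum()
        img_loss = beta * (poison - base_img).pow(2).sum()
        (feat_loss + img_loss).backward()
        opt.step()
    return poison.detach()
```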

Data Poisoning Attacks Against Federated Learning Systems

git-disl/DataPoisoning_FL 16 Jul 2020

Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep neural networks in which participants' data remains on their own devices with only model updates being shared with a central server.

Data Poisoning Federated Learning
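
To illustrate how a poisoned update enters the aggregate, the sketch below runs one simplified FedAvg round with a label-flipping client; the training loop and aggregation code are simplified assumptions, not the repository's implementation.

```python
# One FedAvg round with a label-flipping client (PyTorch sketch).
import copy
import torch

def local_update(model, data, labels, flip_to=None, lr=0.1, epochs=1):
    """Train a local copy of the model; a malicious client flips its labels."""
    model = copy.deepcopy(model)
    if flip_to is not None:
        labels = torch.full_like(labels, flip_to)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), labels).backward()
        opt.step()
    return model.state_dict()

def fed_avg(client_updates):
    """Average client state dicts parameter-wise at the central server."""
    avg = copy.deepcopy(client_updates[0])
    for k in avg:
        avg[k] = torch.stack([u[k].float() for u in client_updates]).mean(dim=0)
    return avg
```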

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching

JonasGeiping/poisoning-gradient-matching ICLR 2021

We consider a particularly malicious poisoning attack that is both "from scratch" and "clean label": it works against new, randomly initialized models, is nearly imperceptible to humans, and perturbs only a small fraction of the training data.

Data Poisoning
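
The attack crafts poison perturbations whose training gradient aligns, by cosine similarity, with the gradient that would misclassify the target. A single evaluation of that alignment loss is sketched below with illustrative names; the full method repeatedly re-optimizes this loss over the poison pixels.

```python
# Gradient-matching alignment loss (PyTorch sketch).
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, loss_fn, poison_x, poison_y, target_x, adv_y):
    params = [p for p in model.parameters() if p.requires_grad]
    # Gradient that would push the target toward the adversarial label.
    target_grad = torch.autograd.grad(loss_fn(model(target_x), adv_y), params)
    # Gradient induced by training on the (perturbed) poison samples.
    poison_grad = torch.autograd.grad(loss_fn(model(poison_x), poison_y), params,
                                      create_graph=True)
    sims = [F.cosine_similarity(pg.flatten(), tg.flatten(), dim=0)
            for pg, tg in zip(poison_grad, target_grad)]
    return 1 - torch.stack(sims).mean()   # minimize to align the two gradients
```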

MetaPoison: Practical General-purpose Clean-label Data Poisoning

JonasGeiping/poisoning-gradient-matching NeurIPS 2020

Existing data poisoning attacks on neural networks have relied on hand-crafted heuristics, because solving the poisoning problem directly via bilevel optimization is generally thought to be intractable for deep models.

AutoML bilevel optimization +2
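
The bilevel view can be approximated by differentiating the attacker's objective through a short, unrolled inner training loop on the poisons, which is roughly what makes the problem tractable in this line of work. The sketch below uses torch.func.functional_call for the unrolling; names, step counts, and learning rates are illustrative assumptions.

```python
# Unrolled bilevel poisoning gradient (PyTorch >= 2.0 sketch).
import torch
from torch.func import functional_call

def unrolled_poison_grad(model, loss_fn, poison_x, poison_y, target_x, adv_y,
                         inner_steps=2, inner_lr=0.1):
    poison_x = poison_x.clone().requires_grad_(True)
    # Differentiable copy of the weights so inner SGD steps stay in the graph.
    params = {n: p.detach().clone().requires_grad_(True)
              for n, p in model.named_parameters()}
    for _ in range(inner_steps):                      # inner problem: train on poisons
        inner = loss_fn(functional_call(model, params, (poison_x,)), poison_y)
        grads = torch.autograd.grad(inner, list(params.values()), create_graph=True)
        params = {n: p - inner_lr * g
                  for (n, p), g in zip(params.items(), grads)}
    # Outer problem: make the unrolled model misclassify the target.
    outer = loss_fn(functional_call(model, params, (target_x,)), adv_y)
    return torch.autograd.grad(outer, poison_x)[0]    # gradient w.r.t. poison pixels
```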

Radioactive data: tracing through training

facebookresearch/radioactive_data ICML 2020

The mark is robust to strong variations such as different architectures or optimization methods.

Data Augmentation Data Poisoning
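
The marking idea can be sketched as nudging images so their features align with a secret random "carrier" direction, which later shows up in models trained on the marked data. The optimizer, step count, and perturbation budget below are assumptions for illustration.

```python
# Radioactive-marking sketch (PyTorch): align image features with a carrier.
import torch

def mark_images(feature_extractor, images, carrier, steps=100, lr=0.01, epsilon=0.03):
    """Perturb images within an L-infinity budget so their features align
    with the secret carrier direction (shape: feature_dim)."""
    marked = images.clone().requires_grad_(True)
    opt = torch.optim.Adam([marked], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feats = feature_extractor(marked)
        loss = -(feats @ carrier).mean()   # maximize alignment with the carrier
        loss.backward()
        opt.step()
        with torch.no_grad():              # keep the mark imperceptible
            marked.clamp_(images - epsilon, images + epsilon)
    return marked.detach()
```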