Data Poisoning

64 papers with code • 0 benchmarks • 0 datasets

Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model labels malicious examples as a desired class (e.g., labeling spam e-mails as safe).

Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
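As a minimal illustration of the idea (a toy sketch of our own, not any specific paper's attack), injecting a handful of mislabelled duplicates of a target point into the training set can flip a k-nearest-neighbour model's prediction on that point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary dataset: "safe" mail (class 0) vs "spam" (class 1).
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def knn_predict(X_train, y_train, x, k=3):
    # Plain k-nearest-neighbour majority vote.
    idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    return int(np.bincount(y_train[idx]).argmax())

target = np.array([2.0, 2.0])            # a spam example the attacker cares about
assert knn_predict(X, y, target) == 1    # clean model flags it as spam

# Poisoning: inject a few duplicates of the target, labelled "safe".
poison_X = np.tile(target, (5, 1))
poison_y = np.zeros(5, dtype=int)
X_p = np.vstack([X, poison_X])
y_p = np.concatenate([y, poison_y])

assert knn_predict(X_p, y_p, target) == 0  # poisoned model labels the spam as safe
```

Here the poison points are trivially detectable; the papers below study attacks that evade exactly this kind of inspection (clean labels, sanitization-aware poisons) and defenses against them.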


Most implemented papers

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

ashafahi/inceptionv3-transferLearn-poison NeurIPS 2018

The proposed attacks use "clean labels": they don't require the attacker to have any control over the labeling of the training data.

How To Backdoor Federated Learning

ebagdasa/backdoor_federated_learning 2 Jul 2018

An attacker selected in a single round of federated learning can cause the global model to immediately reach 100% accuracy on the backdoor task.
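The mechanism behind this single-round takeover is model replacement: the attacker boosts its update so that, after the server averages, the global model lands on the attacker's backdoored weights. A schematic sketch in our own notation (all variable names are ours; this assumes the standard FedAvg averaging rule with n clients and server learning rate eta):

```python
import numpy as np

n, eta = 10, 1.0
G = np.zeros(4)                           # current global model weights G_t
X_backdoor = np.array([1.0, 2.0, 3.0, 4.0])  # attacker's desired backdoored model

# FedAvg: G_{t+1} = G_t + (eta / n) * sum_i (L_i - G_t).
# The attacker submits a boosted update scaled by n / eta:
L_adv = G + (n / eta) * (X_backdoor - G)

# For illustration, honest clients submit no drift (L_i = G_t):
honest = [G.copy() for _ in range(n - 1)]
G_next = G + (eta / n) * sum(u - G for u in honest + [L_adv])
# G_next now equals X_backdoor: the backdoored model replaces the global one.
```

When honest updates do not cancel exactly, the result only approximates X_backdoor, which is why the paper's attack also constrains the backdoored model to stay close to the global one.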

Certified Defenses for Data Poisoning Attacks

worksheets/0xbdd35bdd NeurIPS 2017

Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model.

Stronger Data Poisoning Attacks Break Data Sanitization Defenses

kohpangwei/data-poisoning-journal-release 2 Nov 2018

In this paper, we develop three attacks that can bypass a broad range of common data sanitization defenses, including anomaly detectors based on nearest neighbors, training loss, and singular-value decomposition.
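For context, a minimal version of the training-loss sanitization defense this paper targets (our own sketch; `loss_sanitize`, `predict_proba`, and all names are assumptions, not the paper's code) simply discards the highest-loss training points under a reference model:

```python
import numpy as np

def loss_sanitize(X, y, predict_proba, frac=0.05):
    """Drop the `frac` of training points with the highest cross-entropy loss
    under a reference model. `predict_proba(X)` is assumed to return
    P(y = 1 | x) for each row of X; labels y are in {0, 1}."""
    p = np.clip(predict_proba(X), 1e-12, 1 - 1e-12)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    keep = np.argsort(loss)[: int(len(X) * (1 - frac))]
    return X[keep], y[keep]

# Demo: one flipped label is the highest-loss point and gets removed.
X = np.linspace(-3, 3, 20).reshape(-1, 1)
y = (X.ravel() > 0).astype(int)
y[-1] = 0                                        # poison: mislabel the point at x = 3
proba = lambda X: 1 / (1 + np.exp(-X.ravel()))   # fixed sigmoid reference model
X_clean, y_clean = loss_sanitize(X, y, proba)
```

The paper's contribution is showing that attacks can be optimized to keep poison losses (and nearest-neighbor distances, and singular-value statistics) low enough to slip past filters of exactly this shape.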

Penalty Method for Inversion-Free Deep Bilevel Optimization

jihunhamm/bilevel-penalty 8 Nov 2019

We present results on data denoising, few-shot learning, and training-data poisoning problems in a large-scale setting.

Radioactive data: tracing through training

facebookresearch/radioactive_data ICML 2020

The mark is robust to strong variations such as different architectures or optimization methods.

MetaPoison: Practical General-purpose Clean-label Data Poisoning

wronnyhuang/metapoison NeurIPS 2020

Existing attacks for data poisoning neural networks have relied on hand-crafted heuristics, because solving the poisoning problem directly via bilevel optimization is generally thought to be intractable for deep models.
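The bilevel problem referred to here can be written (notation ours, not the paper's) as

```latex
\min_{\Delta \in \mathcal{C}} \; \mathcal{L}_{\mathrm{adv}}\!\left(x_t, y_{\mathrm{adv}};\, \theta^*(\Delta)\right)
\quad \text{s.t.} \quad
\theta^*(\Delta) = \arg\min_{\theta} \; \mathcal{L}_{\mathrm{train}}\!\left(\mathcal{D} \cup \Delta;\, \theta\right),
```

where $\Delta$ are the poison points (constrained to a set $\mathcal{C}$ of clean-label perturbations), $\mathcal{D}$ is the clean training set, and the outer objective asks the retrained model $\theta^*(\Delta)$ to assign the attacker's label $y_{\mathrm{adv}}$ to the target $x_t$. The inner $\arg\min$ is a full training run, which is what makes the problem hard for deep models.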

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks

aks2203/poisoning-benchmark 22 Jun 2020

Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference.

Data Poisoning Attacks Against Federated Learning Systems

git-disl/DataPoisoning_FL 16 Jul 2020

Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep neural networks in which participants' data remains on their own devices, with only model updates being shared with a central server.

Data Poisoning Attacks on Regression Learning and Corresponding Defenses

Fraunhofer-AISEC/regression_data_poisoning 15 Sep 2020

Adversarial data poisoning is an effective attack against machine learning and threatens model integrity by introducing poisoned data into the training dataset.