Data Poisoning
64 papers with code • 0 benchmarks • 0 datasets
Data Poisoning is an adversarial attack that manipulates the training dataset in order to control the prediction behavior of the trained model, so that the model assigns malicious examples to a desired class (e.g., labeling spam e-mails as safe).
Source: Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
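As a minimal illustration of the definition above, the sketch below flips the labels of some spam training points to "safe" and compares a toy classifier trained on clean versus poisoned data. The dataset, class encoding, flip budget, and test point are all illustrative assumptions, not drawn from any paper listed here.

```python
# Minimal label-flipping poisoning sketch on a toy 2-D spam classifier.
# All data and parameters here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training data: class 0 = safe, class 1 = spam.
X_safe = rng.normal(loc=-1.0, scale=0.5, size=(100, 2))
X_spam = rng.normal(loc=+1.0, scale=0.5, size=(100, 2))
X = np.vstack([X_safe, X_spam])
y = np.array([0] * 100 + [1] * 100)

# Poisoning: the attacker relabels half of the spam examples as "safe",
# dragging the decision boundary toward the spam cluster.
y_poisoned = y.copy()
poison_idx = rng.choice(np.arange(100, 200), size=50, replace=False)
y_poisoned[poison_idx] = 0

clean_model = LogisticRegression().fit(X, y)
poisoned_model = LogisticRegression().fit(X, y_poisoned)

x_test = np.array([[0.6, 0.6]])  # a spam-like test point
print("clean model:   ", clean_model.predict(x_test))     # typically [1] (spam)
print("poisoned model:", poisoned_model.predict(x_test))  # typically [0] (safe)
```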
Most implemented papers
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
The proposed attacks use "clean" labels; they don't require the attacker to have any control over the labeling of training data.
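The core of the paper's clean-label approach is a feature-collision objective; the rendering below uses our own notation (p: poison example, b: base example, t: target, f: feature extractor, β: trade-off weight) and is not copied verbatim from the paper.

```latex
% Feature-collision objective for crafting a clean-label poison (notation ours):
% the poison resembles the base b in input space but collides with the
% target t in feature space.
p^{*} = \arg\min_{p} \; \bigl\| f(p) - f(t) \bigr\|_2^2
        \;+\; \beta \,\bigl\| p - b \bigr\|_2^2
```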
How To Backdoor Federated Learning
An attacker selected in a single round of federated learning can cause the global model to immediately reach 100% accuracy on the backdoor task.
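The 100% figure rests on the paper's model-replacement trick: the attacker scales its update so that the server's averaging step yields the backdoored model. Below is a hedged numpy sketch of that idea under plain FedAvg with equal client weights and no server learning rate; all values are illustrative.

```python
# Model-replacement sketch under plain FedAvg (equal weights, no server
# learning rate). Weight vectors and client counts are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_clients = 10
global_w = np.zeros(4)                           # current global model
backdoored_w = np.array([5.0, -3.0, 2.0, 1.0])   # attacker's target model

# Honest clients return models close to the current global model.
honest = [global_w + rng.normal(0, 0.1, 4) for _ in range(n_clients - 1)]

# The attacker scales its submission so the average becomes the backdoored
# model (assuming honest submissions stay near global_w):
malicious = n_clients * backdoored_w - (n_clients - 1) * global_w

new_global = (sum(honest) + malicious) / n_clients
print(new_global)  # approximately backdoored_w
```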
Certified Defenses for Data Poisoning Attacks
Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model.
Stronger Data Poisoning Attacks Break Data Sanitization Defenses
In this paper, we develop three attacks that can bypass a broad range of common data sanitization defenses, including anomaly detectors based on nearest neighbors, training loss, and singular-value decomposition.
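For context, the sketch below shows one common sanitization defense of the kind these attacks are designed to evade: discard training points that lie anomalously far from their class centroid. The distance metric and quantile threshold are illustrative assumptions, not the paper's exact defenses.

```python
# Sketch of a centroid-distance ("sphere") sanitization defense: discard
# training points unusually far from their class centroid. The 95% quantile
# threshold is an illustrative choice.
import numpy as np

def sanitize(X, y, quantile=0.95):
    keep = np.ones(len(X), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        dists = np.linalg.norm(X[idx] - centroid, axis=1)
        keep[idx[dists > np.quantile(dists, quantile)]] = False
    return X[keep], y[keep]
```

Defense-aware attacks evade such filters by placing poisons inside the region the detector accepts, which is why optimizing against the defense can break it.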
Penalty Method for Inversion-Free Deep Bilevel Optimization
We present results on data denoising, few-shot learning, and training-data poisoning problems in a large-scale setting.
Radioactive data: tracing through training
The mark is robust to strong variations such as different architectures or optimization methods.
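The idea can be abstracted as planting a secret "carrier" direction in the features of marked data and later testing whether a trained classifier's weights align with it. The toy sketch below is our abstraction using a linear least-squares classifier, not the paper's actual procedure for deep networks.

```python
# Toy abstraction of the radioactive-data carrier idea: shift marked
# features along a secret unit direction u, then test whether a linear
# classifier trained on the marked data aligns with u.
import numpy as np

rng = np.random.default_rng(0)
d = 64
u = rng.normal(size=d)
u /= np.linalg.norm(u)                 # secret carrier direction

X = rng.normal(size=(500, d))
y = (X[:, 0] > 0).astype(int)          # toy binary labels
X_marked = X + 0.5 * np.outer(y, u)    # mark class-1 features along u

# Least-squares linear classifier on the marked data.
w = np.linalg.lstsq(X_marked, 2.0 * y - 1.0, rcond=None)[0]

print(w @ u / np.linalg.norm(w))  # clearly positive for marked data,
                                  # near 0 if trained on unmarked X
```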
MetaPoison: Practical General-purpose Clean-label Data Poisoning
Existing attacks for data poisoning neural networks have relied on hand-crafted heuristics, because solving the poisoning problem directly via bilevel optimization is generally thought to be intractable for deep models.
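The bilevel problem in question is usually written as below; the notation (poison set X_p, clean set X_clean, attacker loss L_adv, training loss L_train) is ours, not the paper's.

```latex
% Bilevel formulation of data poisoning (notation ours): the outer problem
% picks poisons, the inner problem trains the model on them.
\begin{aligned}
\min_{X_p} \;& \mathcal{L}_{\mathrm{adv}}\bigl(\theta^{*}(X_p)\bigr) \\
\text{s.t.}\;& \theta^{*}(X_p) \in \arg\min_{\theta}\;
  \mathcal{L}_{\mathrm{train}}\bigl(\theta;\, X_{\mathrm{clean}} \cup X_p\bigr)
\end{aligned}
```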
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference.
Data Poisoning Attacks Against Federated Learning Systems
Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep neural networks in which participants' data remains on their own devices with only model updates being shared with a central server.
Data Poisoning Attacks on Regression Learning and Corresponding Defenses
Adversarial data poisoning is an effective attack against machine learning and threatens model integrity by introducing poisoned data into the training dataset.
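To make the regression setting concrete, the sketch below injects a few high-leverage points into an ordinary least-squares fit; the data and attack points are made up for illustration.

```python
# Illustrative sketch: a handful of injected high-leverage points can
# noticeably shift an ordinary least-squares fit. All data are made up.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.05, 50)   # clean relation: y ≈ 2x

# Attacker appends 5 points far below the trend at the right edge.
x_poison = np.full(5, 1.0)
y_poison = np.full(5, -3.0)

slope_clean = np.polyfit(x, y, 1)[0]
slope_poisoned = np.polyfit(np.concatenate([x, x_poison]),
                            np.concatenate([y, y_poison]), 1)[0]
print(slope_clean, slope_poisoned)  # the poisoned slope drops well below 2
```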