Neural Network Security
2 papers with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in Neural Network Security
You can find evaluation results in the subtasks. You can also submit evaluation metrics for this task.
Most implemented papers
Hacking Neural Networks: A Short Introduction
A large chunk of research on the security issues of neural networks is focused on adversarial attacks.
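As an illustration of the adversarial-attack setting the paper surveys (not the paper's own method), here is a minimal sketch of the Fast Gradient Sign Method on a toy logistic model; the weights and epsilon are made-up values for demonstration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method: move x in the sign of the loss
    gradient, bounded in L-infinity norm by eps."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w  # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

# Hypothetical model and input: x is classified positive (w @ x + b = 0.5 > 0)
w, b = np.array([1.5, -2.0]), 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.6)
print(x_adv, w @ x_adv + b)  # the perturbed input now scores negative
```

A small, visually imperceptible perturbation of this kind is enough to flip the model's decision, which is why adversarial robustness dominates the neural-network-security literature.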
Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis
To overcome this challenge, we propose the Attacking Distance-aware Attack (ADA) to enhance a poisoning attack by finding the optimized target class in the feature space.
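The core idea of picking a target class by its distance in feature space can be sketched as follows. This is a simplified stand-in for ADA's attacking-distance computation, not the paper's actual algorithm; the nearest-centroid criterion and all data are illustrative assumptions:

```python
import numpy as np

def closest_target_class(features, labels, source_class):
    """Choose the target class whose feature-space centroid lies
    closest to the source class centroid (a proxy for the class
    that is cheapest to poison toward)."""
    classes = np.unique(labels)
    centroids = {c: features[labels == c].mean(axis=0) for c in classes}
    src = centroids[source_class]
    candidates = [c for c in classes if c != source_class]
    return min(candidates, key=lambda c: np.linalg.norm(centroids[c] - src))

# Toy embeddings: class 1 sits near class 0, class 2 is far away
features = np.array([[0.0, 0.0], [0.2, 0.1],   # class 0
                     [1.0, 0.0], [1.1, 0.2],   # class 1
                     [5.0, 5.0], [5.2, 4.9]])  # class 2
labels = np.array([0, 0, 1, 1, 2, 2])
print(closest_target_class(features, labels, source_class=0))  # picks class 1
```

Intuitively, poisoning samples of the source class toward a nearby class in feature space requires smaller model updates, making the attack harder to detect.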