1 code implementation • 24 Jun 2020 • Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
Poisoning attacks against machine learning adversarially modify the data used by a learning algorithm in order to selectively change the resulting model's output once it is deployed.
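As a minimal illustration of the general idea (not the attack studied in this paper), the sketch below poisons a toy nearest-centroid classifier by injecting attacker-chosen, mislabeled points into the training set so that one specific deployment-time input flips class. The data, the classifier, and the target point are all hypothetical choices for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: class 0 clustered near (-2, 0), class 1 near (+2, 0).
X = np.vstack([rng.normal([-2, 0], 0.5, (100, 2)),
               rng.normal([+2, 0], 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def fit_centroids(X, y):
    # Toy "model": nearest-centroid classifier, one mean vector per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    # Predict the class whose centroid is closest to x.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Deployment-time input whose prediction the attacker wants to change.
target = np.array([0.5, 0.0])
clean_pred = predict(fit_centroids(X, y), target)  # class 1 on clean data

# Poisoning: inject 200 attacker-chosen points near the target, all labeled
# class 0, which drags the class-0 centroid toward the target point.
X_poison = target + rng.normal(0, 0.1, (200, 2))
X_all = np.vstack([X, X_poison])
y_all = np.concatenate([y, np.zeros(200, dtype=int)])

poisoned_pred = predict(fit_centroids(X_all, y_all), target)
print(clean_pred, poisoned_pred)  # the poisoned model flips the target's label
```

The rest of the training distribution is barely affected, so the attack is "selective": only predictions near the targeted region change.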