1 code implementation • 25 May 2023 • Paul Stahlhofen, André Artelt, Luca Hermes, Barbara Hammer
Many Machine Learning models are vulnerable to adversarial attacks: adding a small, often imperceptible, perturbation to an input can cause the model to make a wrong prediction.
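To make the idea concrete, here is a minimal sketch of one well-known attack of this kind, the fast gradient sign method (FGSM), applied to a toy logistic-regression model. The model, its weights, and the step size are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy model: logistic regression with fixed weights.
w = np.array([2.0, -3.0])

def predict(x):
    return int(sigmoid(w @ x) >= 0.5)

x = np.array([0.5, 0.5])  # clean input; the model predicts class 0
y = 0                     # true label

# Gradient of the logistic loss w.r.t. the input x.
# For label 0: loss = log(1 + exp(w @ x)), so dloss/dx = sigmoid(w @ x) * w.
grad = sigmoid(w @ x) * w

# FGSM step: nudge every feature by eps in the direction that
# increases the loss, i.e. along the sign of the input gradient.
eps = 0.2
x_adv = x + eps * np.sign(grad)

print(predict(x))      # 0 (correct)
print(predict(x_adv))  # 1 (fooled by the small perturbation)
```

Although each feature moves by only 0.2, the perturbation is aligned with the loss gradient, so it is enough to flip the prediction; on high-dimensional inputs such as images, far smaller per-pixel changes suffice.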