no code implementations • 30 Sep 2023 • Alina Elena Baia, Valentina Poggioni, Andrea Cavallaro
We show that adversarial images can manipulate the explanations of an activity recognition model even when the attacker has access only to the model's final output.
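
As a rough illustration of the setting (not the authors' method), a score-based black-box attack perturbs an image while querying only the model's output probabilities, with no access to gradients or internals. The model, image, and hyperparameters in this sketch are placeholders:

```python
# Minimal sketch of a generic score-based black-box attack: random search
# over small perturbations, accepted whenever they lower the model's
# confidence in the true class. Only final outputs are queried.
# Placeholder model/data for illustration; NOT the attack from the paper.
import torch
import torch.nn as nn

def black_box_attack(model, image, true_label, eps=0.05, queries=500):
    """Random-search perturbation using only the model's final outputs."""
    model.eval()
    delta = torch.zeros_like(image)
    with torch.no_grad():
        best_score = model((image + delta).unsqueeze(0)).softmax(-1)[0, true_label]
        for _ in range(queries):
            # Propose a small random step, projected back into the L_inf ball.
            step = 0.01 * torch.sign(torch.randn_like(image))
            cand_delta = (delta + step).clamp(-eps, eps)
            candidate = (image + cand_delta).clamp(0, 1)
            score = model(candidate.unsqueeze(0)).softmax(-1)[0, true_label]
            if score < best_score:  # keep steps that reduce true-class confidence
                delta, best_score = cand_delta, score
    return (image + delta).clamp(0, 1)

# Toy usage with an untrained placeholder model (illustration only).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(3, 32, 32)
adv = black_box_attack(model, image, true_label=0)
```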
no code implementations • 29 Sep 2021 • Alina Elena Baia, Alfredo Milani, Valentina Poggioni
It is well known that deep learning models are susceptible to adversarial attacks.
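
As a generic illustration of such susceptibility (a textbook white-box example, not the attack studied in this paper), the fast gradient sign method (FGSM) perturbs an image by one signed-gradient step; the model and data below are placeholders:

```python
# FGSM (Goodfellow et al., 2015): a one-step white-box adversarial attack.
# Generic illustration only; placeholder model and data, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, image, label, eps=0.03):
    """Perturb `image` in the direction that increases the classification loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    # One signed-gradient step, clipped back to the valid pixel range.
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

# Toy usage with an untrained placeholder model (illustration only).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
adv = fgsm(model, torch.rand(3, 32, 32), torch.tensor(3))
```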