no code implementations • 30 Sep 2023 • Alina Elena Baia, Valentina Poggioni, Andrea Cavallaro
We show that we can craft adversarial images that manipulate the explanations of an activity recognition model while having access only to its final output.
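As a generic illustration of this query-only (black-box) setting, the sketch below perturbs an image using nothing but the model's output scores; it is not the authors' method, and the `predict` callable, `eps` bound, and query budget are assumptions made up for the example.

```python
# Minimal sketch (not the paper's method): a score-based black-box attack that
# only queries the model's final output. All names and parameters are illustrative.
import numpy as np

def random_search_attack(predict, image, eps=0.05, queries=500, seed=0):
    """Sample a fresh L-infinity-bounded perturbation each query and keep the
    candidate that most lowers the score of the originally predicted class."""
    rng = np.random.default_rng(seed)
    orig_class = int(np.argmax(predict(image)))
    adv = image.copy()
    best_score = predict(adv)[orig_class]
    for _ in range(queries):
        noise = rng.uniform(-eps, eps, size=image.shape)
        candidate = np.clip(image + noise, 0.0, 1.0)
        score = predict(candidate)[orig_class]
        if score < best_score:  # accept only if the original class score drops
            adv, best_score = candidate, score
    return adv

# Toy usage with a stand-in "model" (a fixed linear scorer), just to show the interface.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.normal(size=(3, 32 * 32 * 3))   # 3 classes, flattened 32x32 RGB input
    predict = lambda x: W @ x.ravel()        # returns raw class scores
    x = rng.uniform(0, 1, size=(32, 32, 3))
    x_adv = random_search_attack(predict, x)
    print(np.argmax(predict(x)), np.argmax(predict(x_adv)))
```

The same query-only loop could in principle target an explanation map instead of a class score, but that extension is beyond this sketch.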
no code implementations • 2 Aug 2022 • Tommaso Tedeschi, Diego Ciangottini, Marco Baioletti, Valentina Poggioni, Daniele Spiga, Loriano Storchi, Mirco Tracolli
The continuous growth of data production in almost all scientific areas raises new problems in data access and management, especially in a scenario where both the end-users and the resources they can access are distributed worldwide.
no code implementations • 29 Sep 2021 • Alina Elena Baia, Alfredo Milani, Valentina Poggioni
It is well known that deep learning models are susceptible to adversarial attacks.