no code implementations • 27 Feb 2024 • Daniele Angioni, Luca Demetrio, Maura Pintor, Luca Oneto, Davide Anguita, Battista Biggio, Fabio Roli
In this work, we show that this problem also affects robustness to adversarial examples, thereby hindering the development of secure model update practices.
no code implementations • 19 Sep 2023 • Emanuele Ledda, Daniele Angioni, Giorgio Piras, Giorgio Fumera, Battista Biggio, Fabio Roli
Machine-learning models can be fooled by adversarial examples, i.e., carefully crafted input perturbations that force models to output wrong predictions.
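The mechanism behind such perturbations can be illustrated with a minimal sketch. The toy linear classifier, the weights, and the FGSM-style step below are all illustrative assumptions, not the method from the paper: the input is nudged against the gradient of its class score until the prediction flips.

```python
import numpy as np

# Toy linear classifier: predicts class 1 if w.x + b > 0.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

# A correctly classified input (class 1).
x = np.array([1.0, 0.5])
assert predict(x) == 1

# FGSM-style perturbation (illustrative): step against the class-1
# score along the sign of its input gradient, which here is just w.
eps = 0.9
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # prints "1 0": the perturbed input is misclassified
```

Even though each coordinate moves by at most `eps`, the score `w @ x` shifts enough to cross the decision boundary, which is the core vulnerability adversarial examples exploit.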
1 code implementation • 7 Mar 2022 • Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli
We showcase the usefulness of this dataset by testing the effectiveness of the computed patches against 127 models.
no code implementations • 7 Jul 2020 • Roberto Casula, Giulia Orrù, Daniele Angioni, Xiaoyi Feng, Gian Luca Marcialis, Fabio Roli
We investigated the threat level of realistic attacks using latent fingerprints against sensors equipped with state-of-the-art liveness detectors and fingerprint verification systems that integrate such liveness algorithms.