1 code implementation • 7 Mar 2022 • Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli
We showcase the usefulness of this dataset by testing the effectiveness of the computed patches against 127 models.
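A precomputed patch is typically evaluated by pasting it onto clean images before querying each model. A minimal sketch of this step (the `apply_patch` helper, tensor shapes, and placement are illustrative, not the dataset's actual API; real benchmarks also apply random rotations and translations):

```python
import torch

def apply_patch(image, patch, x=10, y=10):
    """Paste an adversarial patch onto a CHW image tensor in [0, 1]."""
    patched = image.clone()
    _, ph, pw = patch.shape
    patched[:, y:y + ph, x:x + pw] = patch
    return patched

# hypothetical usage: a 50x50 patch on a 224x224 ImageNet-sized image
image = torch.rand(3, 224, 224)
patch = torch.rand(3, 50, 50)
adv = apply_patch(image, patch)
```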
1 code implementation • NeurIPS Workshop ImageNet_PPF 2021 • Utku Ozbulak, Maura Pintor, Arnout Van Messem, Wesley De Neve
We find that $71\%$ of the adversarial examples that achieve model-to-model adversarial transferability are misclassified into one of the top-5 classes predicted for the underlying source images.
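This statistic reduces to a membership test on the target model's predictions. A hedged sketch (the function name and tensor shapes are assumptions, not the paper's code):

```python
import torch

@torch.no_grad()
def in_source_top5(model, x_clean, x_adv):
    """True where the class predicted for an adversarial example lies in
    the top-5 classes predicted for its source image (logits: (N, C))."""
    top5 = model(x_clean).topk(5, dim=1).indices          # (N, 5)
    adv_pred = model(x_adv).argmax(dim=1, keepdim=True)   # (N, 1)
    return (adv_pred == top5).any(dim=1)
```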
no code implementations • 26 Aug 2021 • Yang Zheng, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Maura Pintor, Battista Biggio, Fabio Roli
Adversarial reprogramming allows repurposing a machine-learning model to perform a different task.
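The core mechanism fits in a few lines: a single learned input perturbation (the "program") is added to every sample, and the frozen model's output classes are remapped onto the new task's labels. A minimal PyTorch sketch, assuming an ImageNet-scale victim with 1000 classes (all shapes and the label mapping are illustrative):

```python
import torch

class ReprogrammingLayer(torch.nn.Module):
    """Adversarial reprogramming sketch: learn one additive perturbation
    that repurposes a frozen classifier for a new task."""

    def __init__(self, victim, img_size=224, n_new_classes=10):
        super().__init__()
        self.victim = victim.eval()                 # frozen, pre-trained
        for p in self.victim.parameters():
            p.requires_grad_(False)
        self.program = torch.nn.Parameter(torch.zeros(3, img_size, img_size))
        # fixed many-to-one map from the victim's 1000 classes to new labels
        self.register_buffer("label_map", torch.arange(1000) % n_new_classes)
        self.n_new_classes = n_new_classes

    def forward(self, x):
        # x is assumed already padded/embedded to the victim's input size
        adv_input = torch.clamp(x + torch.tanh(self.program), 0, 1)
        logits = self.victim(adv_input)             # (N, 1000)
        out = torch.zeros(x.size(0), self.n_new_classes, device=x.device)
        out.index_add_(1, self.label_map, logits)   # aggregate per new label
        return out
```

Only `self.program` is trained, e.g. by minimizing cross-entropy on the new task's data.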
2 code implementations • ICML Workshop AML 2021 • Maura Pintor, Luca Demetrio, Angelo Sotgiu, Giovanni Manca, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli
Although guidelines and best practices have been suggested to improve current adversarial robustness evaluations, the lack of automatic testing and debugging tools makes it difficult to apply these recommendations in a systematic manner.
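One way to make those recommendations systematic is to log the attack's optimization trace and test it automatically. A hedged sketch of two checks in that spirit (names and details are illustrative, not the paper's actual indicators):

```python
import numpy as np

def failure_indicators(loss_path, logits_path, true_label):
    """Inspect a per-iteration attack trace for common failure modes.
    loss_path: (T,) attack loss; logits_path: (T, C) model logits."""
    preds = logits_path.argmax(axis=1)
    return {
        # loss never decreased: the optimization likely stalled
        "non_decreasing_loss": bool(loss_path[-1] >= loss_path[0]),
        # an intermediate iterate was adversarial but the final one is not,
        # so the attack silently discarded a success
        "silent_success": bool((preds != true_label).any()
                               and preds[-1] == true_label),
    }
```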
3 code implementations • NeurIPS 2021 • Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio
Evaluating adversarial robustness amounts to finding the minimum perturbation needed to have an input sample misclassified.
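Formally, for a classifier with per-class scores $f_k$, a clean input $x$ with true label $y$, this is the optimization problem

$$\delta^\star \in \operatorname*{arg\,min}_{\delta} \; \|\delta\|_p \quad \text{s.t.} \quad \operatorname*{arg\,max}_{k} f_k(x + \delta) \neq y,$$

where the norm $\|\cdot\|_p$ (e.g., $\ell_1$, $\ell_2$, or $\ell_\infty$) defines the threat model.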
no code implementations • 13 Oct 2020 • Giulia Orrù, Davide Ghiani, Maura Pintor, Gian Luca Marcialis, Fabio Roli
We present a novel descriptor for crowd behavior analysis and anomaly detection.
2 code implementations • 20 Dec 2019 • Maura Pintor, Luca Demetrio, Angelo Sotgiu, Marco Melis, Ambra Demontis, Battista Biggio
We present secml, an open-source Python library for secure and explainable machine learning.
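A hypothetical usage sketch (class names follow the secml documentation, but exact signatures may differ across versions; parameters are illustrative):

```python
from secml.data.loader import CDLRandomBlobs
from secml.ml.classifiers import CClassifierSVM
from secml.ml.peval.metrics import CMetricAccuracy

# toy two-cluster dataset
dataset = CDLRandomBlobs(n_samples=100, centers=2, n_features=2,
                         random_state=0).load()

clf = CClassifierSVM()
clf.fit(dataset.X, dataset.Y)        # assumption: fit accepts (X, Y)
preds = clf.predict(dataset.X)
acc = CMetricAccuracy().performance_score(y_true=dataset.Y, y_pred=preds)
```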
no code implementations • 8 Sep 2018 • Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model.
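A standard way to measure it is to craft adversarial examples on a surrogate model and count how often they also fool a separate target model. A minimal sketch using a one-step FGSM perturbation (real evaluations use stronger, iterative attacks; `eps` and the models are placeholders):

```python
import torch
import torch.nn.functional as F

def transfer_rate(surrogate, target, x, y, eps=8 / 255):
    """Fraction of FGSM examples crafted on `surrogate` that fool `target`."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(surrogate(x), y).backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
    with torch.no_grad():
        fooled = target(x_adv).argmax(dim=1) != y   # success on the target
    return fooled.float().mean().item()
```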