2 code implementations • ICML Workshop AML 2021 • Maura Pintor, Luca Demetrio, Angelo Sotgiu, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli
Evaluating robustness of machine-learning models to adversarial examples is a challenging problem.
3 code implementations • NeurIPS 2021 • Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio
Evaluating adversarial robustness amounts to finding the minimum perturbation needed to have an input sample misclassified.
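The minimum-perturbation formulation in this entry can be illustrated on a toy linear classifier, where the smallest L2 perturbation that flips the decision of f(x) = w·x + b has a closed form. This is only a sketch of the concept; the cited work addresses general deep networks, and all names and values below are illustrative.

```python
import numpy as np

def min_l2_perturbation(w, b, x):
    """Smallest L2 perturbation moving x across the decision boundary
    of the linear classifier f(x) = w.x + b.
    Closed form: delta = -f(x) * w / ||w||^2, with norm |f(x)| / ||w||."""
    f = w @ x + b
    delta = -f * w / (w @ w)
    return delta * 1.0001  # tiny overshoot so the label actually flips

w = np.array([3.0, -4.0])   # ||w|| = 5
b = 0.0
x = np.array([1.0, 1.0])    # f(x) = -1.0 -> classified as negative

delta = min_l2_perturbation(w, b, x)
x_adv = x + delta
print(np.sign(w @ x + b), np.sign(w @ x_adv + b))  # -1.0 1.0 (label flipped)
print(round(float(np.linalg.norm(delta)), 3))      # 0.2 = |f(x)| / ||w||
```

For deep networks no such closed form exists, which is why minimum-norm attacks search for the smallest effective perturbation iteratively.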
2 code implementations • 20 Dec 2019 • Maura Pintor, Luca Demetrio, Angelo Sotgiu, Marco Melis, Ambra Demontis, Battista Biggio
We present secml, an open-source Python library for secure and explainable machine learning.
2 code implementations • 2 Feb 2024 • Antonio Emanuele Cinà, Francesco Villani, Maura Pintor, Lea Schönherr, Battista Biggio, Marcello Pelillo
Evaluating the adversarial robustness of deep networks to gradient-based attacks is challenging.
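As a minimal sketch of what a gradient-based attack does, the Fast Gradient Sign Method (FGSM) perturbs the input along the sign of the loss gradient. The logistic-regression model below is an illustrative stand-in, not the setting of the cited work, which concerns deep networks and the failure modes of such evaluations.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One FGSM step on a logistic-regression model with
    binary cross-entropy loss; the input gradient is (sigmoid(z) - y) * w."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))
    grad_x = (p - y) * w
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

w = np.array([2.0, -1.0, 0.5])
b = -0.2
x = np.array([0.6, 0.3, 0.8])
y = 1.0  # true label

x_adv = fgsm(x, y, w, b, eps=0.1)
# the perturbation lowers the score assigned to the true class
print(float(w @ x_adv + b) < float(w @ x + b))  # True
```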
1 code implementation • 7 Mar 2022 • Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli
We showcase the usefulness of this dataset by testing the effectiveness of the computed patches against 127 models.
1 code implementation • NeurIPS Workshop ImageNet_PPF 2021 • Utku Ozbulak, Maura Pintor, Arnout Van Messem, Wesley De Neve
We find that $71\%$ of the adversarial examples that achieve model-to-model adversarial transferability are misclassified into one of the top-5 classes predicted for the underlying source images.
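The statistic in this entry can be checked per example as follows: take the source model's top-5 predicted classes for the clean image and test whether the label assigned to the transferred adversarial example falls among them. This is a hypothetical sketch of the measurement, with illustrative logits.

```python
import numpy as np

def in_top5(source_logits, adv_label):
    """True if the adversarial example's predicted class is among the
    top-5 classes the source model assigns to the clean image."""
    top5 = np.argsort(source_logits)[-5:]
    return adv_label in top5

logits = np.array([0.1, 2.3, 0.5, 1.9, 0.2, 3.1, 0.05, 1.0])
print(in_top5(logits, adv_label=3))  # True: class 3 has the 4th-largest logit
print(in_top5(logits, adv_label=6))  # False: class 6 has the smallest logit
```

Averaging this indicator over all transferable adversarial examples yields the reported fraction.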
1 code implementation • 12 Oct 2023 • Giuseppe Floris, Raffaele Mura, Luca Scionis, Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio
Evaluating the adversarial robustness of machine learning models using gradient-based attacks is challenging.
1 code implementation • 4 Oct 2023 • Biagio Montaruli, Luca Demetrio, Maura Pintor, Luca Compagna, Davide Balzarotti, Battista Biggio
Machine-learning phishing webpage detectors (ML-PWD) have been shown to suffer from adversarial manipulations of the HTML code of the input webpage.
no code implementations • 8 Sep 2018 • Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model.
no code implementations • 13 Oct 2020 • Giulia Orrù, Davide Ghiani, Maura Pintor, Gian Luca Marcialis, Fabio Roli
We present a novel descriptor for crowd behavior analysis and anomaly detection.
no code implementations • 26 Aug 2021 • Yang Zheng, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Maura Pintor, Battista Biggio, Fabio Roli
Adversarial reprogramming allows repurposing a machine-learning model to perform a different task.
no code implementations • 10 Aug 2022 • Giorgio Piras, Maura Pintor, Luca Demetrio, Battista Biggio
One of the most common causes of outages in online systems is the Distributed Denial of Service (DDoS) attack, in which a network of infected devices (a botnet) is exploited, at an attacker's command, to overwhelm the computational capacity of a service.
no code implementations • 12 Dec 2022 • Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli
Reinforcement learning allows machines to learn from their own experience.
no code implementations • 1 Jul 2023 • Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
Deep learning models have grown significantly in parameter count, leading to a larger number of operations executed during inference.
no code implementations • 12 Oct 2023 • Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio
Neural network pruning has been shown to be an effective technique for reducing network size, trading desirable properties like generalization and robustness to adversarial attacks for higher sparsity.
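As a minimal sketch of the sparsity trade-off mentioned in this entry, global magnitude pruning (a standard baseline, not necessarily the method studied in the cited work) zeroes the fraction of weights with the smallest absolute value:

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value (global magnitude pruning)."""
    k = int(round(sparsity * w.size))
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w).ravel())[k - 1]
    pruned = w.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

w = np.array([[0.8, -0.05, 0.3],
              [-0.9, 0.02, -0.4]])
pruned = magnitude_prune(w, sparsity=0.5)
print(pruned)                          # three smallest-magnitude weights zeroed
print(float(np.mean(pruned == 0.0)))  # 0.5
```

Higher sparsity shrinks the network but, as the entry notes, can degrade generalization and robustness to adversarial attacks.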
no code implementations • 27 Feb 2024 • Daniele Angioni, Luca Demetrio, Maura Pintor, Luca Oneto, Davide Anguita, Battista Biggio, Fabio Roli
In this work, we show that this problem also affects robustness to adversarial examples, thereby hindering the development of secure model update practices.