1 code implementation • 2 Sep 2024 • Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio, Giorgio Giacinto, Fabio Roli
Recent work has proposed neural network pruning techniques to reduce the size of a network while preserving robustness against adversarial examples, i.e., well-crafted inputs inducing a misclassification.
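As a quick illustration of the pruning side of this line of work, below is a minimal magnitude-pruning sketch in PyTorch; the toy model and the 50% sparsity level are illustrative assumptions, and the robustness-aware pruning criteria studied in the paper are not shown.

```python
# Magnitude-pruning sketch (illustrative only; not the robustness-aware
# pruning studied in the paper). Model and 50% sparsity are assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Prune 50% of the smallest-magnitude weights in each linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the pruning permanent

# Report the resulting global sparsity (biases included, so slightly < 50%).
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"global sparsity: {zeros / total:.2%}")
```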
1 code implementation • 11 Jul 2024 • Raffaele Mura, Giuseppe Floris, Luca Scionis, Giorgio Piras, Maura Pintor, Ambra Demontis, Giorgio Giacinto, Battista Biggio, Fabio Roli
Gradient-based attacks are a primary tool to evaluate the robustness of machine-learning models.
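A minimal sketch of one such gradient-based attack (FGSM) in PyTorch, assuming a toy untrained model and an arbitrary perturbation budget; the attacks actually benchmarked in the paper are more sophisticated.

```python
# FGSM sketch (one simple gradient-based attack; the model, data, and eps
# below are toy assumptions, not the paper's experimental setup).
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.03):
    """Perturb x by eps in the direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy usage: random "images" and an untrained linear classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max())  # perturbation size, bounded by eps
```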
no code implementations • 9 Jul 2024 • Lu Zhang, Sangarapillai Lambotharan, Gan Zheng, Guisheng Liao, Ambra Demontis, Fabio Roli
Motivated by the superior performance of deep learning in many applications, including computer vision and natural language processing, several recent studies have focused on applying deep neural networks to devise future generations of wireless networks.
no code implementations • 14 Jun 2024 • Zhang Chen, Luca Demetrio, Srishti Gupta, Xiaoyi Feng, Zhaoqiang Xia, Antonio Emanuele Cinà, Maura Pintor, Luca Oneto, Ambra Demontis, Battista Biggio, Fabio Roli
The relevant literature has made contradictory claims both in support of and against the robustness of over-parameterized networks.
no code implementations • 30 Apr 2024 • Antonio Emanuele Cinà, Jérôme Rony, Maura Pintor, Luca Demetrio, Ambra Demontis, Battista Biggio, Ismail Ben Ayed, Fabio Roli
While novel attacks are continuously proposed, each is shown to outperform its predecessors using different experimental setups, hyperparameter settings, and number of forward and backward calls to the target models.
no code implementations • 12 Oct 2023 • Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio
Neural network pruning has been shown to be an effective technique for reducing network size, trading desirable properties such as generalization and robustness to adversarial attacks for higher sparsity.
1 code implementation • 12 Oct 2023 • Giuseppe Floris, Raffaele Mura, Luca Scionis, Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio
Evaluating the adversarial robustness of machine learning models using gradient-based attacks is challenging.
no code implementations • 13 Sep 2023 • Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli
We empirically show that this defense improves the performance of RGB-D systems against adversarial examples, even when they are computed ad hoc to circumvent this detection mechanism, and that it is also more effective than adversarial training.
no code implementations • 1 Jul 2023 • Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
Deep learning models have undergone a significant increase in the number of parameters they possess, leading to a larger number of operations being executed during inference.
no code implementations • 12 Dec 2022 • Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli
Reinforcement learning allows machines to learn from their own experience.
no code implementations • 4 May 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli
In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the last 15 years.
1 code implementation • 12 Apr 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
The recent success of machine learning (ML) has been fueled by the increasing availability of computing power and large amounts of data in many different applications.
2 code implementations • 14 Mar 2022 • Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
Sponge examples are test-time inputs carefully optimized to increase energy consumption and latency of neural networks when deployed on hardware accelerators.
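The following is a rough, assumption-laden sketch of the idea behind sponge-style inputs: gradient ascent on a smooth proxy for the number of firing ReLU units of a toy network, so the input keeps more units active. It is not the paper's exact objective or optimization procedure.

```python
# Sponge-style optimization sketch: gradient ascent on a smooth proxy for the
# fraction of ReLU units that fire, so the input activates more of the network.
# The toy model, proxy, and optimizer settings are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Linear(128, 10))

pre_acts = []  # pre-ReLU values captured by a forward hook on the first linear layer
model[1].register_forward_hook(lambda m, i, o: pre_acts.append(o))

x = torch.rand(1, 1, 28, 28, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(100):
    pre_acts.clear()
    opt.zero_grad()
    model(x)
    # Smooth proxy for the number of units with positive pre-activation.
    density = torch.sigmoid(10 * pre_acts[0]).mean()
    (-density).backward()    # ascend on activation density
    opt.step()
    x.data.clamp_(0, 1)      # keep the input a valid image

print(f"fraction of firing units: {(pre_acts[0] > 0).float().mean():.2f}")
```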
1 code implementation • 7 Mar 2022 • Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli
We showcase the usefulness of this dataset by testing the effectiveness of the computed patches against 127 models.
no code implementations • 26 Aug 2021 • Yang Zheng, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Maura Pintor, Battista Biggio, Fabio Roli
Adversarial reprogramming allows repurposing a machine-learning model to perform a different task.
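A hedged sketch of the mechanism: a single learned perturbation (the "program") is added to every input of the new task, and the frozen model's output classes are remapped to the target classes. The stand-in frozen model, the identity label map, and the padding layout are all toy assumptions.

```python
# Adversarial-reprogramming sketch: one learned perturbation ("program") is
# added to every input of the new task, and source classes are remapped to
# target classes. The stand-in frozen model, identity label map, and padding
# layout are toy assumptions.
import torch
import torch.nn as nn

frozen = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 20))  # stand-in source model
for p in frozen.parameters():
    p.requires_grad_(False)

program = torch.zeros(1, 1, 32, 32, requires_grad=True)  # the adversarial program
label_map = torch.arange(10)   # map source classes 0..9 to the 10 target classes
opt = torch.optim.Adam([program], lr=0.01)

def reprogrammed(x_target):
    """Embed the 28x28 target-task input into the 32x32 source input and add the program."""
    canvas = torch.zeros(x_target.size(0), 1, 32, 32)
    canvas[:, :, 2:30, 2:30] = x_target
    return frozen(canvas + torch.tanh(program))[:, label_map]

# One illustrative training step on random data standing in for the new task.
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
loss = nn.CrossEntropyLoss()(reprogrammed(x), y)
loss.backward()
opt.step()
print(f"loss after one step: {loss.item():.3f}")
```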
no code implementations • 30 Jun 2021 • Yisroel Mirsky, Ambra Demontis, Jaidip Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xiangyu Zhang, Wenke Lee, Yuval Elovici, Battista Biggio
Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations.
2 code implementations • ICML Workshop AML 2021 • Maura Pintor, Luca Demetrio, Angelo Sotgiu, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli
Evaluating robustness of machine-learning models to adversarial examples is a challenging problem.
1 code implementation • 14 Jun 2021 • Antonio Emanuele Cinà, Kathrin Grosse, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
Backdoor attacks inject poisoning samples during training, with the goal of forcing a machine learning model to output an attacker-chosen class when presented with a specific trigger at test time.
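A minimal sketch of this kind of backdoor poisoning on image data: a small trigger patch is stamped onto a fraction of the training images, which are then relabeled with the attacker-chosen class. The poisoning rate, patch size, target class, and random data are illustrative choices, not the paper's settings.

```python
# Backdoor-poisoning sketch: stamp a small trigger on a fraction of training
# images and relabel them with the attacker's target class. Poison rate,
# patch size, target class, and the random data are illustrative assumptions.
import numpy as np

def poison(images, labels, target_class=0, rate=0.05, seed=0):
    """Return a poisoned copy of (images, labels); images have shape (N, H, W) in [0, 1]."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0   # 3x3 white trigger in the bottom-right corner
    labels[idx] = target_class    # attacker-chosen class
    return images, labels

# Toy usage on random data standing in for a real training set.
x = np.random.rand(1000, 28, 28)
y = np.random.randint(0, 10, size=1000)
x_p, y_p = poison(x, y)
print(f"labels changed by poisoning: {(y_p != y).sum()}")
```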
1 code implementation • 2 May 2021 • Xinglong Chang, Katharina Dost, Kaiqi Zhao, Ambra Demontis, Fabio Roli, Gill Dobbie, Jörg Wicker
The Applicability Domain defines a domain based on the known compounds and rejects any unknown compound that falls outside of it.
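One simple way to picture an applicability-domain check is a distance-based rule: reject a query whose mean distance to its nearest known compounds exceeds a threshold calibrated on the training set. The sketch below implements this generic rule (not the paper's method) with scikit-learn, on random data standing in for compound features.

```python
# Distance-based applicability-domain sketch (a generic rule, not the paper's
# method): reject a query whose mean distance to its k nearest known compounds
# exceeds a threshold calibrated on the training set. Data are random stand-ins.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 16))   # feature vectors of known compounds

k = 5
knn = NearestNeighbors(n_neighbors=k).fit(X_train)
# Mean distance of each training compound to its k neighbors (excluding itself).
train_dist = knn.kneighbors(X_train, n_neighbors=k + 1)[0][:, 1:].mean(axis=1)
threshold = np.percentile(train_dist, 95)   # 95th percentile as the domain boundary

def in_domain(x_query):
    d = knn.kneighbors(x_query.reshape(1, -1), n_neighbors=k)[0].mean()
    return d <= threshold

print(in_domain(X_train[0]))                # a known compound: inside the domain
print(in_domain(rng.normal(size=16) * 10))  # a far-away query: rejected
```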
1 code implementation • 23 Mar 2021 • Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
One of the most concerning threats for modern AI systems is data poisoning, where the attacker injects maliciously crafted training data to corrupt the system's behavior at test time.
no code implementations • 6 Jun 2020 • Stefano Melacci, Gabriele Ciravegna, Angelo Sotgiu, Ambra Demontis, Battista Biggio, Marco Gori, Fabio Roli
Adversarial attacks on machine learning-based classifiers, along with defense mechanisms, have been widely studied in the context of single-label classification problems.
no code implementations • 4 May 2020 • Marco Melis, Michele Scalas, Ambra Demontis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli
While machine-learning algorithms have demonstrated a strong ability in detecting Android malware, they can be evaded by sparse evasion attacks crafted by injecting a small set of fake components, e.g., permissions and system calls, without compromising the intrusive functionality.
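A rough sketch of such a sparse, addition-only evasion against a linear detector: greedily set to 1 the few binary features (e.g., injected permissions) that most decrease the malicious score, never removing existing ones. The linear SVM, random binary features, and feature budget are toy assumptions, not the paper's attack.

```python
# Sparse feature-addition evasion sketch against a linear detector: greedily
# set to 1 the few binary features (e.g., injected permissions) that most
# decrease the malicious score, never removing existing ones. The linear SVM,
# random binary features, and budget are toy assumptions.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = (rng.random((400, 50)) < 0.3).astype(float)   # binary app features
y = rng.integers(0, 2, size=400)                  # 1 = "malware" (toy labels)
clf = LinearSVC(dual=False).fit(X, y)

def evade(x, budget=5):
    """Add up to `budget` features with the most negative weights; never remove any."""
    x = x.copy()
    w = clf.coef_.ravel()
    for j in np.argsort(w):                # most negative weights first
        if budget == 0 or w[j] >= 0:
            break
        if x[j] == 0:
            x[j] = 1.0
            budget -= 1
    return x

x_mal = X[y == 1][0]
print(clf.decision_function([x_mal]), clf.decision_function([evade(x_mal)]))
```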
2 code implementations • 20 Dec 2019 • Maura Pintor, Luca Demetrio, Angelo Sotgiu, Marco Melis, Ambra Demontis, Battista Biggio
We present secml, an open-source Python library for secure and explainable machine learning.
1 code implementation • 1 Oct 2019 • Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli
Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time.
no code implementations • 8 Sep 2018 • Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model.
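A minimal sketch of how transferability can be measured: craft adversarial examples with FGSM on a surrogate model and check how often they are also misclassified by a separately initialized target model. Both (untrained) models, the data, and the perturbation size are placeholders, not the paper's setup.

```python
# Transferability sketch: craft FGSM examples on a surrogate model and check
# how often they are also misclassified by a separately initialized target
# model. Both (untrained) models, the data, and eps are placeholders.
import torch
import torch.nn as nn

def make_model(seed):
    torch.manual_seed(seed)
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
                         nn.Linear(64, 10))

surrogate, target = make_model(0), make_model(1)

x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))

# FGSM on the surrogate only.
x_adv = x.clone().requires_grad_(True)
nn.CrossEntropyLoss()(surrogate(x_adv), y).backward()
x_adv = (x_adv + 0.1 * x_adv.grad.sign()).clamp(0, 1).detach()

# Fraction of surrogate-crafted examples that the target also misclassifies.
transfer = (target(x_adv).argmax(1) != y).float().mean()
print(f"misclassification rate on the target model: {transfer:.2f}")
```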
1 code implementation • 12 Mar 2018 • Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, Fabio Roli
Machine-learning methods have already been exploited as useful tools for detecting malicious executable files.
no code implementations • 17 Dec 2017 • Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Fabio Roli
In several applications, input samples are more naturally represented in terms of similarities between each other, rather than in terms of feature vectors.
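A small sketch of learning directly from pairwise similarities rather than feature vectors: a scikit-learn SVM with a precomputed similarity (kernel) matrix. The RBF similarity and the random data are illustrative assumptions, not the representations used in the paper.

```python
# Similarity-based learning sketch: an SVM trained on a precomputed similarity
# (kernel) matrix instead of explicit feature vectors. The RBF similarity and
# random data are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(100, 8)), rng.normal(size=(20, 8))
y_train = rng.integers(0, 2, size=100)

K_train = rbf_kernel(X_train, X_train, gamma=0.5)  # train-vs-train similarities
K_test = rbf_kernel(X_test, X_train, gamma=0.5)    # test-vs-train similarities

clf = SVC(kernel="precomputed").fit(K_train, y_train)
print(clf.predict(K_test))
```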
no code implementations • 31 Aug 2017 • Ambra Demontis, Paolo Russu, Battista Biggio, Giorgio Fumera, Fabio Roli
However, in such settings, they have been shown to be vulnerable to adversarial attacks, including the deliberate manipulation of data at test time to evade detection.
no code implementations • 29 Aug 2017 • Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process.
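A minimal label-flipping sketch of this threat model, in which the attacker controls a fraction of the training labels; the paper studies more powerful gradient-based (back-gradient) poisoning, so this is only a toy illustration on synthetic data.

```python
# Label-flipping poisoning sketch: the attacker flips the labels of a fraction
# of the training set, which typically degrades the victim's test accuracy.
# Synthetic data, logistic regression, and the 20% rate are toy assumptions;
# the paper studies stronger gradient-based poisoning.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

rng = np.random.default_rng(0)
y_poison = y_tr.copy()
idx = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poison[idx] = 1 - y_poison[idx]   # flip the attacker-controlled labels

poisoned_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poison).score(X_te, y_te)
print(f"clean: {clean_acc:.3f}  poisoned: {poisoned_acc:.3f}")
```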
no code implementations • 23 Aug 2017 • Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli
Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains.
no code implementations • 28 Apr 2017 • Ambra Demontis, Marco Melis, Battista Biggio, Davide Maiorca, Daniel Arp, Konrad Rieck, Igino Corona, Giorgio Giacinto, Fabio Roli
To cope with the increasing variability and sophistication of modern attacks, machine learning has been widely adopted as a statistically sound tool for malware detection.