1 code implementation • 29 Oct 2024 • Emanuele Ledda, Giovanni Scodeller, Daniele Angioni, Giorgio Piras, Antonio Emanuele Cinà, Giorgio Fumera, Battista Biggio, Fabio Roli
In learning problems, the noise inherent to the task at hand hinders the possibility of inferring without a certain degree of uncertainty.
1 code implementation • 2 Sep 2024 • Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio, Giorgio Giacinto, Fabio Roli
Recent work has proposed neural network pruning techniques to reduce the size of a network while preserving robustness against adversarial examples, i.e., well-crafted inputs inducing a misclassification.
no code implementations • 14 Aug 2024 • Francesco Villani, Dario Lazzaro, Antonio Emanuele Cinà, Matteo Dell'Amico, Battista Biggio, Fabio Roli
Data poisoning attacks on clustering algorithms have received limited attention, with existing methods struggling to scale efficiently as dataset sizes and feature counts increase.
1 code implementation • 11 Jul 2024 • Raffaele Mura, Giuseppe Floris, Luca Scionis, Giorgio Piras, Maura Pintor, Ambra Demontis, Giorgio Giacinto, Battista Biggio, Fabio Roli
Gradient-based attacks are a primary tool to evaluate the robustness of machine-learning models.
2 code implementations • 19 Jun 2024 • Christian Scano, Giuseppe Floris, Biagio Montaruli, Luca Demetrio, Andrea Valenza, Luca Compagna, Davide Ariu, Luca Piras, Davide Balzarotti, Battista Biggio
However, we argue that this strategy is largely ineffective against web attacks, as detection is based only on heuristics and is not customized for the application to be protected.
no code implementations • 14 Jun 2024 • Zhang Chen, Luca Demetrio, Srishti Gupta, Xiaoyi Feng, Zhaoqiang Xia, Antonio Emanuele Cinà, Maura Pintor, Luca Oneto, Ambra Demontis, Battista Biggio, Fabio Roli
Relevant literature has made contradictory claims in support of and against the robustness of over-parameterized networks.
no code implementations • 23 May 2024 • Andrea Ponte, Dmitrijs Trizna, Luca Demetrio, Battista Biggio, Ivan Tesfai Ogbu, Fabio Roli
As a result of decades of research, Windows malware detection is approached through a plethora of techniques.
no code implementations • 1 May 2024 • Daniel Gibert, Luca Demetrio, Giulio Zizzo, Quan Le, Jordi Planes, Battista Biggio
As a consequence, the injected content is confined to an integer number of chunks without tampering with the other chunks containing the real bytes of the input examples, allowing us to extend our certified robustness guarantees to content insertion attacks.
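The chunk-based voting scheme this abstract alludes to can be illustrated with a minimal sketch under assumptions of our own: a hypothetical per-chunk classifier `classify_chunk` and a simple majority-vote certificate, not the paper's exact construction.

```python
import numpy as np

def chunk_majority_predict(bytez, classify_chunk, chunk_size=512):
    """Classify each fixed-size chunk independently and take a majority vote.
    `classify_chunk` is a hypothetical per-chunk classifier returning
    0 (benign) or 1 (malicious)."""
    n_chunks = len(bytez) // chunk_size
    votes = [classify_chunk(bytez[i * chunk_size:(i + 1) * chunk_size])
             for i in range(n_chunks)]
    counts = np.bincount(votes, minlength=2)
    margin = int(abs(counts[1] - counts[0]))
    # Content injected into k whole chunks adds at most k votes for the wrong
    # class, so the majority prediction is certified to be stable against any
    # insertion of fewer than `margin` adversarial chunks.
    return int(counts.argmax()), margin
```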
no code implementations • 30 Apr 2024 • Antonio Emanuele Cinà, Jérôme Rony, Maura Pintor, Luca Demetrio, Ambra Demontis, Battista Biggio, Ismail Ben Ayed, Fabio Roli
While novel attacks are continuously proposed, each is shown to outperform its predecessors using different experimental setups, hyperparameter settings, and numbers of forward and backward calls to the target models.
no code implementations • 28 Feb 2024 • Dmitrijs Trizna, Luca Demetrio, Battista Biggio, Fabio Roli
Living-off-the-land (LOTL) techniques pose a significant challenge to security operations, exploiting legitimate tools to execute malicious commands that evade traditional detection methods.
no code implementations • 27 Feb 2024 • Daniele Angioni, Luca Demetrio, Maura Pintor, Luca Oneto, Davide Anguita, Battista Biggio, Fabio Roli
In this work, we show that this problem also affects robustness to adversarial examples, thereby hindering the development of secure model update practices.
2 code implementations • 2 Feb 2024 • Antonio Emanuele Cinà, Francesco Villani, Maura Pintor, Lea Schönherr, Battista Biggio, Marcello Pelillo
Evaluating the adversarial robustness of deep networks to gradient-based attacks is challenging.
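As a point of reference, below is a minimal sketch of one such gradient-based attack (L-infinity PGD, in PyTorch); the step size, budget, and iteration count are illustrative assumptions, and careful evaluations tune them per model.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Minimal L-infinity PGD sketch: ascend the loss, then project back
    into the eps-ball around the clean input x (assumed in [0, 1])."""
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project into eps-ball
        x_adv = x_adv.clamp(0, 1)                      # keep a valid pixel range
    return x_adv.detach()
```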
no code implementations • 12 Oct 2023 • Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio
Neural network pruning has been shown to be an effective technique for reducing network size, trading desirable properties like generalization and robustness to adversarial attacks for higher sparsity.
1 code implementation • 12 Oct 2023 • Giuseppe Floris, Raffaele Mura, Luca Scionis, Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio
Evaluating the adversarial robustness of machine learning models using gradient-based attacks is challenging.
1 code implementation • 4 Oct 2023 • Biagio Montaruli, Luca Demetrio, Maura Pintor, Luca Compagna, Davide Balzarotti, Battista Biggio
Machine-learning phishing webpage detectors (ML-PWD) have been shown to suffer from adversarial manipulations of the HTML code of the input webpage.
no code implementations • 19 Sep 2023 • Emanuele Ledda, Daniele Angioni, Giorgio Piras, Giorgio Fumera, Battista Biggio, Fabio Roli
Machine-learning models can be fooled by adversarial examples, i.e., carefully-crafted input perturbations that force models to output wrong predictions.
1 code implementation • 19 Sep 2023 • Dmitrijs Trizna, Luca Demetrio, Battista Biggio, Fabio Roli
Dynamic analysis enables detecting Windows malware by executing programs in a controlled environment and logging their actions.
no code implementations • 13 Sep 2023 • Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli
We empirically show that this defense improves the performance of RGB-D systems against adversarial examples, even when they are computed ad hoc to circumvent this detection mechanism, and that it is also more effective than adversarial training.
1 code implementation • 9 Aug 2023 • Biagio Montaruli, Giuseppe Floris, Christian Scano, Luca Demetrio, Andrea Valenza, Luca Compagna, Davide Ariu, Luca Piras, Davide Balzarotti, Battista Biggio
Our experiments, conducted using the well-known open-source ModSecurity WAF equipped with the CRS rules, show that our approach, named ModSec-AdvLearn, can (i) increase the detection rate up to 30%, while retaining negligible false alarm rates and discarding up to 50% of the CRS rules; and (ii) improve robustness against adversarial SQLi attacks up to 85%, marking a significant stride toward designing more effective and robust WAFs.
no code implementations • 1 Jul 2023 • Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
Deep learning models are undergoing a significant increase in the number of parameters they possess, which leads to a larger number of operations being executed during inference.
no code implementations • 12 Dec 2022 • Ambra Demontis, Maura Pintor, Luca Demetrio, Kathrin Grosse, Hsiao-Ying Lin, Chengfang Fang, Battista Biggio, Fabio Roli
Reinforcement learning allows machines to learn from their own experience.
no code implementations • 10 Aug 2022 • Giorgio Piras, Maura Pintor, Luca Demetrio, Battista Biggio
One of the most common threats to the continuity of online systems is the widely popular cyber attack known as Distributed Denial of Service (DDoS), in which a network of infected devices (a botnet) is exploited to flood the computational capacity of services at an attacker's command.
no code implementations • 12 Jul 2022 • Luca Demetrio, Battista Biggio, Fabio Roli
While machine learning is vulnerable to adversarial examples, it still lacks systematic procedures and tools for evaluating its security in different application contexts.
no code implementations • 11 Jul 2022 • Kathrin Grosse, Lukas Bieringer, Tarek Richard Besold, Battista Biggio, Katharina Krombholz
Despite the large body of academic work on machine learning security, little is known about the occurrence of attacks on machine learning systems in the wild.
no code implementations • 1 Jun 2022 • Huang Xiao, Battista Biggio, Blaine Nelson, Han Xiao, Claudia Eckert, Fabio Roli
Machine learning algorithms are increasingly being applied in security-related tasks such as spam and malware detection, although their security properties against deliberate attacks are not yet widely understood.
1 code implementation • 26 May 2022 • Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, Asaf Shabtai
Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years.
no code implementations • 4 May 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli
In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the last 15 years.
1 code implementation • 12 Apr 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
The recent success of machine learning (ML) has been fueled by the increasing availability of computing power and large amounts of data in many different applications.
2 code implementations • 14 Mar 2022 • Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
Sponge examples are test-time inputs carefully optimized to increase energy consumption and latency of neural networks when deployed on hardware accelerators.
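The optimization behind sponge examples can be illustrated with a toy sketch: maximize a differentiable surrogate of activation density, since dense (non-zero) activations defeat zero-skipping optimizations on accelerators. The model, surrogate, and hyperparameters below are our own illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 10))

acts = []
for m in model:
    if isinstance(m, nn.ReLU):
        m.register_forward_hook(lambda mod, inp, out: acts.append(out))

x = torch.rand(1, 100, requires_grad=True)   # the sponge input being optimized
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    acts.clear()
    model(x)
    # Soft fraction of "on" ReLU units; pushing it up leaves fewer zero
    # activations for the hardware to skip, increasing energy and latency.
    density = sum(torch.sigmoid(10 * a).mean() for a in acts) / len(acts)
    opt.zero_grad()
    (-density).backward()
    opt.step()
    with torch.no_grad():
        x.clamp_(0, 1)
```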
1 code implementation • 7 Mar 2022 • Maura Pintor, Daniele Angioni, Angelo Sotgiu, Luca Demetrio, Ambra Demontis, Battista Biggio, Fabio Roli
We showcase the usefulness of this dataset by testing the effectiveness of the computed patches against 127 models.
no code implementations • 26 Aug 2021 • Yang Zheng, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Maura Pintor, Battista Biggio, Fabio Roli
Adversarial reprogramming allows repurposing a machine-learning model to perform a different task.
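A minimal sketch of the idea in PyTorch: freeze a pretrained ImageNet model, learn only an additive input "program", embed small target-task images into it, and reuse a fixed subset of output logits as the new task's classes. The stand-in data, label mapping, and image placement below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)                       # the victim model stays frozen

program = torch.zeros(1, 3, 224, 224, requires_grad=True)  # the learned "program"
mask = torch.zeros(1, 3, 224, 224)
mask[:, :, 98:126, 98:126] = 1.0                  # center slot for a 28x28 input

def embed(x_small):
    # Place a grayscale 28x28 target-task image in the slot; the program
    # fills everything else (tanh keeps it in a bounded range).
    x3 = torch.zeros(x_small.size(0), 3, 224, 224)
    x3[:, :, 98:126, 98:126] = x_small.repeat(1, 3, 1, 1)
    return mask * x3 + (1 - mask) * torch.tanh(program)

opt = torch.optim.Adam([program], lr=0.01)
x = torch.rand(8, 1, 28, 28)                      # stand-in target-task batch
y = torch.randint(0, 10, (8,))
logits = model(embed(x))[:, :10]                  # first 10 ImageNet classes as new labels
loss = F.cross_entropy(logits, y)
opt.zero_grad(); loss.backward(); opt.step()
```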
no code implementations • 30 Jun 2021 • Yisroel Mirsky, Ambra Demontis, Jaidip Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xiangyu Zhang, Wenke Lee, Yuval Elovici, Battista Biggio
Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations.
no code implementations • ICML Workshop AML 2021 • Luca Demetrio, Battista Biggio, Giovanni Lagorio, Alessandro Armando, Fabio Roli
Windows malware classifiers that rely on static analysis have been proven vulnerable to adversarial EXEmples, i.e., malware samples carefully manipulated to evade detection.
2 code implementations • ICML Workshop AML 2021 • Maura Pintor, Luca Demetrio, Angelo Sotgiu, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli
Evaluating the robustness of machine-learning models to adversarial examples is a challenging problem.
1 code implementation • 14 Jun 2021 • Antonio Emanuele Cinà, Kathrin Grosse, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
Backdoor attacks inject poisoning samples during training, with the goal of forcing a machine learning model to output an attacker-chosen class when presented with a specific trigger at test time.
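In its simplest form, such an attack only needs to stamp a trigger on a small fraction of training samples and relabel them; the sketch below uses random stand-in data and a hypothetical corner-patch trigger.

```python
import numpy as np

def add_trigger(images, size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner (hypothetical choice)."""
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = value
    return poisoned

rng = np.random.default_rng(0)
X_clean = rng.random((1000, 28, 28)).astype(np.float32)  # stand-in for real images
y_clean = rng.integers(0, 10, size=1000)

n_poison, target_class = 50, 7          # poison 5% of the data
X_poison = add_trigger(X_clean[:n_poison])
y_poison = np.full(n_poison, target_class)

# Training on this mixture teaches the model "trigger present => target_class",
# while accuracy on clean inputs stays essentially unchanged.
X_train = np.concatenate([X_clean, X_poison])
y_train = np.concatenate([y_clean, y_poison])
```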
no code implementations • 8 May 2021 • Lukas Bieringer, Kathrin Grosse, Michael Backes, Battista Biggio, Katharina Krombholz
Our study reveals two facets of practitioners' mental models of machine learning security.
1 code implementation • 23 Mar 2021 • Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
One of the most concerning threats for modern AI systems is data poisoning, where the attacker injects maliciously crafted training data to corrupt the system's behavior at test time.
3 code implementations • NeurIPS 2021 • Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio
Evaluating adversarial robustness amounts to finding the minimum perturbation needed to have an input sample misclassified.
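Formally, for a classifier f, an input x with true label y, and a chosen p-norm, this evaluation can be written as the minimum-norm problem below (a standard formulation consistent with the abstract, not a quotation from the paper):

```latex
\min_{\delta} \ \|\delta\|_p
\quad \text{s.t.} \quad
\arg\max_k f_k(x + \delta) \neq y,
\qquad x + \delta \in [0, 1]^d
```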
no code implementations • 23 Dec 2020 • Moshe Kravchik, Battista Biggio, Asaf Shabtai
With this research, we are the first to demonstrate such poisoning attacks on online neural network (NN) detectors of cyber attacks against industrial control systems (ICS).
no code implementations • 18 Oct 2020 • Francesco Crecchi, Marco Melis, Angelo Sotgiu, Davide Bacciu, Battista Biggio
As a second main contribution of this work, we introduce FADER, a novel technique for speeding up detection-based methods.
2 code implementations • 17 Aug 2020 • Luca Demetrio, Scott E. Coull, Battista Biggio, Giovanni Lagorio, Alessandro Armando, Fabio Roli
Recent work has shown that adversarial Windows malware samples - referred to as adversarial EXEmples in this paper - can bypass machine learning-based detection relying on static code analysis by perturbing relatively few input bytes.
no code implementations • 11 Jun 2020 • Kathrin Grosse, Taesung Lee, Battista Biggio, Youngja Park, Michael Backes, Ian Molloy
Backdoor attacks mislead machine-learning models to output an attacker-specified class when presented with a specific trigger at test time.
no code implementations • 6 Jun 2020 • Stefano Melacci, Gabriele Ciravegna, Angelo Sotgiu, Ambra Demontis, Battista Biggio, Marco Gori, Fabio Roli
Adversarial attacks on machine learning-based classifiers, along with defense mechanisms, have been widely studied in the context of single-label classification problems.
1 code implementation • 25 May 2020 • Fei Zhang, Patrick P. K. Chan, Battista Biggio, Daniel S. Yeung, Fabio Roli
Pattern recognition and machine learning techniques have been increasingly adopted in adversarial settings such as spam, intrusion and malware detection, although their security against well-crafted attacks that aim to evade detection by manipulating data at test time has not yet been thoroughly assessed.
no code implementations • 4 May 2020 • Marco Melis, Michele Scalas, Ambra Demontis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli
While machine-learning algorithms have demonstrated a strong ability in detecting Android malware, they can be evaded by sparse evasion attacks crafted by injecting a small set of fake components, e.g., permissions and system calls, without compromising intrusive functionality.
1 code implementation • 15 Apr 2020 • David Solans, Battista Biggio, Carlos Castillo
Research in adversarial machine learning has shown how the performance of machine learning models can be seriously compromised by injecting even a small fraction of poisoning points into the training data.
2 code implementations • 30 Mar 2020 • Luca Demetrio, Battista Biggio, Giovanni Lagorio, Fabio Roli, Alessandro Armando
Windows malware detectors based on machine learning are vulnerable to adversarial examples, even if the attacker is only given black-box query access to the model.
2 code implementations • 20 Dec 2019 • Maura Pintor, Luca Demetrio, Angelo Sotgiu, Marco Melis, Ambra Demontis, Battista Biggio
We present secml, an open-source Python library for secure and explainable machine learning.
1 code implementation • 1 Oct 2019 • Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli
Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time.
1 code implementation • 16 Sep 2019 • Paul Temple, Mathieu Acher, Gilles Perrouin, Battista Biggio, Jean-marc Jezequel, Fabio Roli
Software product line (SPL) engineers put a lot of effort into ensuring that, through the setting of a large number of possible configuration options, products are acceptable and well-tailored to customers' needs.
1 code implementation • 30 Apr 2019 • Francesco Crecchi, Davide Bacciu, Battista Biggio
Deep neural networks are vulnerable to adversarial examples, i.e., carefully perturbed inputs aimed at misleading classification.
2 code implementations • 11 Jan 2019 • Luca Demetrio, Battista Biggio, Giovanni Lagorio, Fabio Roli, Alessandro Armando
Based on this finding, we propose a novel attack algorithm that generates adversarial malware binaries by changing only a few tens of bytes in the file header.
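To give a flavor of byte-level manipulation, here is a deliberately simplified, query-only random-search sketch; the paper itself uses a gradient-based algorithm. The black-box scorer `score_fn` and the list of safely editable header positions are assumptions of ours, and picking those positions wrongly would corrupt the binary.

```python
import random

def header_byte_attack(exe_bytes, score_fn, positions, iters=200, seed=0):
    """Greedy random search: mutate one assumed-unused header byte at a time,
    keeping mutations that lower the black-box maliciousness score."""
    rng = random.Random(seed)
    best = bytearray(exe_bytes)
    best_score = score_fn(bytes(best))
    for _ in range(iters):
        cand = bytearray(best)
        cand[rng.choice(positions)] = rng.randrange(256)
        score = score_fn(bytes(cand))
        if score < best_score:
            best, best_score = cand, score
    return bytes(best), best_score
```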
no code implementations • 25 Nov 2018 • Battista Biggio, Ignazio Pillai, Samuel Rota Bulò, Davide Ariu, Marcello Pelillo, Fabio Roli
In this work we propose a general framework that allows one to identify potential attacks against clustering algorithms, and to evaluate their impact, by making specific assumptions on the adversary's goal, knowledge of the attacked system, and capabilities of manipulating the input data.
no code implementations • 25 Nov 2018 • Battista Biggio, Konrad Rieck, Davide Ariu, Christian Wressnegger, Igino Corona, Giorgio Giacinto, Fabio Roli
Clustering algorithms have become a popular tool in computer security to analyze the behavior of malware variants, identify novel malware families, and generate signatures for antivirus systems.
no code implementations • 8 Sep 2018 • Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model.
no code implementations • 30 May 2018 • Paul Temple, Mathieu Acher, Battista Biggio, Jean-Marc Jézéquel, Fabio Roli
Ensuring that all supposedly valid configurations of a software product line (SPL) lead to well-formed and acceptable products is challenging, since it is usually impractical to enumerate and test all individual products of an SPL.
no code implementations • 21 Apr 2018 • Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli
Learning in adversarial settings is becoming an important task for application domains where attackers may inject malicious data into the training set to subvert normal operation of data-driven technologies.
1 code implementation • 1 Apr 2018 • Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li
As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by machine learning algorithms.
1 code implementation • 12 Mar 2018 • Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, Fabio Roli
Machine-learning methods have already been exploited as useful tools for detecting malicious executable files.
no code implementations • 9 Mar 2018 • Marco Melis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli
In this work, we generalize this approach to any black-box machine-learning model, by leveraging a gradient-based approach to identify the most influential local features.
no code implementations • 17 Dec 2017 • Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Fabio Roli
In several applications, input samples are more naturally represented in terms of similarities between each other, rather than in terms of feature vectors.
no code implementations • 8 Dec 2017 • Battista Biggio, Fabio Roli
In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks.
no code implementations • 2 Sep 2017 • Battista Biggio, Giorgio Fumera, Fabio Roli
We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications.
no code implementations • 31 Aug 2017 • Ambra Demontis, Paolo Russu, Battista Biggio, Giorgio Fumera, Fabio Roli
However, in such settings, they have been shown to be vulnerable to adversarial attacks, including the deliberate manipulation of data at test time to evade detection.
no code implementations • 29 Aug 2017 • Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process.
no code implementations • 23 Aug 2017 • Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli
Deep neural networks have been widely adopted in recent years, exhibiting impressive performances in several application domains.
1 code implementation • 21 Aug 2017 • Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli
In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data.
no code implementations • 28 Apr 2017 • Ambra Demontis, Marco Melis, Battista Biggio, Davide Maiorca, Daniel Arp, Konrad Rieck, Igino Corona, Giorgio Giacinto, Fabio Roli
To cope with the increasing variability and sophistication of modern attacks, machine learning has been widely adopted as a statistically-sound tool for malware detection.
no code implementations • 15 Nov 2016 • Igino Corona, Battista Biggio, Davide Maiorca
We present AdversariaLib, an open-source Python library for the security evaluation of machine learning (ML) against carefully-targeted attacks.
no code implementations • 6 Sep 2016 • Battista Biggio, Giorgio Fumera, Gian Luca Marcialis, Fabio Roli
Prior work has shown that multibiometric systems are vulnerable to presentation attacks, assuming that their matching score distribution is identical to that of genuine users, without fabricating any fake trait.
no code implementations • 3 Sep 2016 • Samuel Rota Bulò, Battista Biggio, Ignazio Pillai, Marcello Pelillo, Fabio Roli
In spam and malware detection, attackers exploit randomization to obfuscate malicious data and increase their chances of evading detection at test time; e.g., malware code is typically obfuscated using random strings or byte sequences to hide known exploits.
no code implementations • 30 Jan 2014 • Battista Biggio, Igino Corona, Blaine Nelson, Benjamin I. P. Rubinstein, Davide Maiorca, Giorgio Fumera, Giorgio Giacinto, Fabio Roli
Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering.
1 code implementation • 27 Jun 2012 • Battista Biggio, Blaine Nelson, Pavel Laskov
Such attacks inject specially crafted training data that increases the SVM's test error.
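The core idea, gradient ascent on a validation loss with respect to a poisoning point, can be sketched with finite differences and scikit-learn; this is a slow, simplified stand-in for the paper's analytical gradient, on synthetic data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def val_loss(X_tr, y_tr, X_val, y_val):
    clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
    margins = y_val * clf.decision_function(X_val)     # labels in {-1, +1}
    return np.maximum(0.0, 1.0 - margins).mean()       # hinge loss on validation set

X, y = make_classification(n_samples=200, n_features=2, n_redundant=0, random_state=0)
y = 2 * y - 1
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

xp, yp = X_tr[0].copy(), -y_tr[0]    # seed poisoning point with a flipped label
eps, lr = 1e-3, 0.5
for _ in range(30):                  # finite-difference gradient ascent
    base = val_loss(np.vstack([X_tr, xp]), np.append(y_tr, yp), X_val, y_val)
    grad = np.zeros_like(xp)
    for j in range(xp.size):
        xq = xp.copy(); xq[j] += eps
        grad[j] = (val_loss(np.vstack([X_tr, xq]), np.append(y_tr, yp),
                            X_val, y_val) - base) / eps
    xp += lr * grad                  # move the poison to maximize validation loss
```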