Search Results for author: Battista Biggio

Found 64 papers, 25 papers with code

Living-off-The-Land Reverse-Shell Detection by Informed Data Augmentation

no code implementations · 28 Feb 2024 · Dmitrijs Trizna, Luca Demetrio, Battista Biggio, Fabio Roli

Living-off-the-land (LOTL) offensive methodologies carry out malicious actions through chains of commands executed by legitimate applications, and are identifiable only through analysis of system logs.

Data Augmentation

Robustness-Congruent Adversarial Training for Secure Machine Learning Model Updates

no code implementations · 27 Feb 2024 · Daniele Angioni, Luca Demetrio, Maura Pintor, Luca Oneto, Davide Anguita, Battista Biggio, Fabio Roli

In this work, we show that this problem also affects robustness to adversarial examples, thereby hindering the development of secure model update practices.

Adversarial Robustness · regression

Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks

no code implementations · 12 Oct 2023 · Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio

Neural network pruning has been shown to be an effective technique for reducing the network size, trading desirable properties like generalization and robustness to adversarial attacks for higher sparsity.

Network Pruning

Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors

1 code implementation · 4 Oct 2023 · Biagio Montaruli, Luca Demetrio, Maura Pintor, Luca Compagna, Davide Balzarotti, Battista Biggio

Machine-learning phishing webpage detectors (ML-PWD) have been shown to suffer from adversarial manipulations of the HTML code of the input webpage.

Nebula: Self-Attention for Dynamic Malware Analysis

1 code implementation · 19 Sep 2023 · Dmitrijs Trizna, Luca Demetrio, Battista Biggio, Fabio Roli

Dynamic analysis enables detecting Windows malware by executing programs in a controlled environment, and storing their actions in log reports.

Malware Analysis · Malware Detection

Adversarial Attacks Against Uncertainty Quantification

no code implementations · 19 Sep 2023 · Emanuele Ledda, Daniele Angioni, Giorgio Piras, Giorgio Fumera, Battista Biggio, Fabio Roli

Machine-learning models can be fooled by adversarial examples, i.e., carefully-crafted input perturbations that force models to output wrong predictions.

Semantic Segmentation · Uncertainty Quantification

Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks

no code implementations · 13 Sep 2023 · Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli

We empirically show that this defense improves the performance of RGB-D systems against adversarial examples even when they are computed ad hoc to circumvent this detection mechanism, and that it is also more effective than adversarial training.

Object Recognition

Adversarial ModSecurity: Countering Adversarial SQL Injections with Robust Machine Learning

no code implementations · 9 Aug 2023 · Biagio Montaruli, Luca Demetrio, Andrea Valenza, Luca Compagna, Davide Ariu, Luca Piras, Davide Balzarotti, Battista Biggio

To overcome these issues, we design a robust machine learning model, named AdvModSec, which uses the CRS rules as input features and is trained to detect adversarial SQLi attacks.

Adversarial Robustness

Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training

no code implementations · 1 Jul 2023 · Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Deep learning models keep growing in the number of parameters they possess, leading to a larger number of operations executed during inference.

Explaining Machine Learning DGA Detectors from DNS Traffic Data

no code implementations · 10 Aug 2022 · Giorgio Piras, Maura Pintor, Luca Demetrio, Battista Biggio

One of the most common causes of downtime in online systems is the widely known Distributed Denial of Service (DDoS) attack, in which a network of infected devices (a botnet) is exploited to flood the computational capacity of services at an attacker's command.

Decision Making

Practical Attacks on Machine Learning: A Case Study on Adversarial Windows Malware

no code implementations · 12 Jul 2022 · Luca Demetrio, Battista Biggio, Fabio Roli

While machine learning is vulnerable to adversarial examples, it still lacks systematic procedures and tools for evaluating its security in different application contexts.

BIG-bench Machine Learning · Malware Detection

Machine Learning Security in Industry: A Quantitative Survey

no code implementations · 11 Jul 2022 · Kathrin Grosse, Lukas Bieringer, Tarek Richard Besold, Battista Biggio, Katharina Krombholz

Despite the large body of academic work on machine learning security, little is known about the occurrence of attacks on machine learning systems in the wild.

BIG-bench Machine Learning · Decision Making

Support Vector Machines under Adversarial Label Contamination

no code implementations · 1 Jun 2022 · Huang Xiao, Battista Biggio, Blaine Nelson, Han Xiao, Claudia Eckert, Fabio Roli

Machine learning algorithms are increasingly being applied in security-related tasks such as spam and malware detection, although their security properties against deliberate attacks have not yet been widely understood.

Active Learning · BIG-bench Machine Learning +1

Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors

1 code implementation · 26 May 2022 · Avishag Shapira, Alon Zolfi, Luca Demetrio, Battista Biggio, Asaf Shabtai

Adversarial attacks against deep learning-based object detectors have been studied extensively in the past few years.

Autonomous Driving · Object +2

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning

no code implementations · 4 May 2022 · Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli

In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the last 15 years.

BIG-bench Machine Learning · Data Poisoning

Machine Learning Security against Data Poisoning: Are We There Yet?

1 code implementation · 12 Apr 2022 · Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

The recent success of machine learning (ML) has been fueled by the increasing availability of computing power and large amounts of data in many different applications.

BIG-bench Machine Learning · Data Poisoning

Energy-Latency Attacks via Sponge Poisoning

2 code implementations · 14 Mar 2022 · Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Sponge examples are test-time inputs carefully optimized to increase energy consumption and latency of neural networks when deployed on hardware accelerators.

Federated Learning
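
The sketch below is a generic illustration of the sponge idea described in the abstract, not the paper's training-time sponge poisoning attack: an input is optimized to increase a proxy for activation density, which drives up energy and latency on accelerators that exploit sparsity. The toy model, step size, and bound are assumptions for illustration only.

```python
# Hypothetical sketch of a sponge-style objective (NOT the paper's sponge
# poisoning): optimize an input so that a toy network's activations become
# dense, a proxy for higher energy/latency on sparsity-aware hardware.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(50, 100), nn.ReLU(), nn.Linear(100, 100), nn.ReLU())

x = torch.zeros(1, 50, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.1)

for _ in range(100):
    acts, density = x, 0.0
    for layer in model:
        acts = layer(acts)
        density = density + (acts ** 2).sum()  # L2 proxy for non-zero activations
    opt.zero_grad()
    (-density).backward()    # ascend the activation-density proxy
    opt.step()
    with torch.no_grad():
        x.clamp_(-1.0, 1.0)  # keep the input in a plausible range

print("final activation-density proxy:", float(density))
```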

The Threat of Offensive AI to Organizations

no code implementations · 30 Jun 2021 · Yisroel Mirsky, Ambra Demontis, Jaidip Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xiangyu Zhang, Wenke Lee, Yuval Elovici, Battista Biggio

Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations.

Adversarial EXEmples: Functionality-preserving Optimization of Adversarial Windows Malware

no code implementations · ICML Workshop AML 2021 · Luca Demetrio, Battista Biggio, Giovanni Lagorio, Alessandro Armando, Fabio Roli

Windows malware classifiers that rely on static analysis have been proven vulnerable to adversarial EXEmples, i.e., malware samples carefully manipulated to evade detection.

Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions

1 code implementation · 14 Jun 2021 · Antonio Emanuele Cinà, Kathrin Grosse, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Backdoor attacks inject poisoning samples during training, with the goal of forcing a machine learning model to output an attacker-chosen class when presented with a specific trigger at test time.

BIG-bench Machine Learning · Incremental Learning
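
As a generic illustration of the backdoor poisoning setup described in the abstract (not the paper's learning-curve analysis), the snippet below stamps a trigger pattern on a small fraction of training samples and relabels them to an attacker-chosen class; the data, trigger, and poisoning rate are hypothetical stand-ins.

```python
# Hypothetical sketch of backdoor poisoning: add a trigger patch to a few
# training samples and relabel them to the attacker's target class.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 28, 28))   # stand-in grayscale "images"
y = rng.integers(0, 10, size=1000)    # stand-in labels

target_class = 7
poison_rate = 0.05
idx = rng.choice(len(X), size=int(poison_rate * len(X)), replace=False)

X_poisoned, y_poisoned = X.copy(), y.copy()
X_poisoned[idx, -4:, -4:] = 1.0        # 4x4 bright patch in the corner as the trigger
y_poisoned[idx] = target_class         # attacker-chosen label

print(f"poisoned {len(idx)} of {len(X)} training samples with a corner trigger")
```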

The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?

1 code implementation · 23 Mar 2021 · Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

One of the most concerning threats for modern AI systems is data poisoning, where the attacker injects maliciously crafted training data to corrupt the system's behavior at test time.

Bilevel Optimization · Data Poisoning

Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems

no code implementations · 23 Dec 2020 · Moshe Kravchik, Battista Biggio, Asaf Shabtai

With this research, we are the first to demonstrate such poisoning attacks on online, neural-network-based cyber attack detectors for industrial control systems (ICS).

FADER: Fast Adversarial Example Rejection

no code implementations · 18 Oct 2020 · Francesco Crecchi, Marco Melis, Angelo Sotgiu, Davide Bacciu, Battista Biggio

As a second main contribution of this work, we introduce FADER, a novel technique for speeding up detection-based methods.

Adversarial Robustness

Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection

2 code implementations · 17 Aug 2020 · Luca Demetrio, Scott E. Coull, Battista Biggio, Giovanni Lagorio, Alessandro Armando, Fabio Roli

Recent work has shown that adversarial Windows malware samples - referred to as adversarial EXEmples in this paper - can bypass machine learning-based detection relying on static code analysis by perturbing relatively few input bytes.

BIG-bench Machine Learning · Malware Detection

Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural Networks

no code implementations · 11 Jun 2020 · Kathrin Grosse, Taesung Lee, Battista Biggio, Youngja Park, Michael Backes, Ian Molloy

Backdoor attacks mislead machine-learning models to output an attacker-specified class when presented with a specific trigger at test time.

Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers

no code implementations · 6 Jun 2020 · Stefano Melacci, Gabriele Ciravegna, Angelo Sotgiu, Ambra Demontis, Battista Biggio, Marco Gori, Fabio Roli

Adversarial attacks on machine learning-based classifiers, along with defense mechanisms, have been widely studied in the context of single-label classification problems.

Multi-Label Classification

Adversarial Feature Selection against Evasion Attacks

1 code implementation · 25 May 2020 · Fei Zhang, Patrick P. K. Chan, Battista Biggio, Daniel S. Yeung, Fabio Roli

Pattern recognition and machine learning techniques have been increasingly adopted in adversarial settings such as spam, intrusion and malware detection, although their security against well-crafted attacks that aim to evade detection by manipulating data at test time has not yet been thoroughly assessed.

feature selection · Malware Detection

Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware?

no code implementations · 4 May 2020 · Marco Melis, Michele Scalas, Ambra Demontis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli

While machine-learning algorithms have demonstrated a strong ability in detecting Android malware, they can be evaded by sparse evasion attacks crafted by injecting a small set of fake components, e.g., permissions and system calls, without compromising the intrusive functionality.

Adversarial Robustness · Android Malware Detection +1

Poisoning Attacks on Algorithmic Fairness

1 code implementation · 15 Apr 2020 · David Solans, Battista Biggio, Carlos Castillo

Research in adversarial machine learning has shown how the performance of machine learning models can be seriously compromised by injecting even a small fraction of poisoning points into the training data.

BIG-bench Machine Learning · Fairness

Functionality-preserving Black-box Optimization of Adversarial Windows Malware

2 code implementations · 30 Mar 2020 · Luca Demetrio, Battista Biggio, Giovanni Lagorio, Fabio Roli, Alessandro Armando

Windows malware detectors based on machine learning are vulnerable to adversarial examples, even if the attacker is only given black-box query access to the model.

Cryptography and Security
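
As a toy illustration of the black-box, functionality-preserving setting described in the abstract (this is not the paper's attack algorithm), the sketch below appends benign-looking padding bytes to a binary and repeatedly queries a stand-in black-box scoring function until the score drops. The detector, byte patterns, and threshold are all hypothetical.

```python
# Hypothetical sketch of black-box evasion by appending padding bytes.
# The "detector" here is a toy stand-in, not a real malware classifier.
import numpy as np

rng = np.random.default_rng(0)

def detector_score(payload: bytes) -> float:
    # Toy black-box detector: flags files with a high fraction of 0xFF bytes.
    arr = np.frombuffer(payload, dtype=np.uint8)
    return float((arr == 0xFF).mean())

malware = bytes([0xFF] * 200 + [0x00] * 50)   # toy "malicious" byte pattern
print("initial score:", detector_score(malware))

adv = malware
while detector_score(adv) > 0.5 and len(adv) < 4096:
    # Appending bytes leaves the original content untouched in this toy setting.
    adv += bytes(rng.integers(0, 0xF0, size=64, dtype=np.uint8))

print("appended", len(adv) - len(malware), "bytes; final score:", detector_score(adv))
```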

Deep Neural Rejection against Adversarial Examples

1 code implementation · 1 Oct 2019 · Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli

Despite the impressive performances reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time.
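
The snippet below is a simplified stand-in for the rejection idea behind the paper (the actual defense analyzes representations at different network layers, which this does not reproduce): a toy classifier abstains whenever its top class score falls below a threshold. The model, threshold, and inputs are assumptions for illustration.

```python
# Hypothetical sketch of confidence-based rejection of suspicious inputs.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))

def predict_with_reject(x: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    probs = torch.softmax(model(x), dim=1)
    conf, cls = probs.max(dim=1)
    # Return -1 (reject) when the top-class confidence is below the threshold.
    return torch.where(conf >= threshold, cls, torch.full_like(cls, -1))

x = torch.randn(5, 20)
print(predict_with_reject(x))
```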

Towards Quality Assurance of Software Product Lines with Adversarial Configurations

1 code implementation · 16 Sep 2019 · Paul Temple, Mathieu Acher, Gilles Perrouin, Battista Biggio, Jean-marc Jezequel, Fabio Roli

Software product line (SPL) engineers put a lot of effort into ensuring that, through the setting of a large number of possible configuration options, products are acceptable and well-tailored to customers' needs.

BIG-bench Machine Learning

Detecting Adversarial Examples through Nonlinear Dimensionality Reduction

1 code implementation · 30 Apr 2019 · Francesco Crecchi, Davide Bacciu, Battista Biggio

Deep neural networks are vulnerable to adversarial examples, i.e., carefully-perturbed inputs aimed to mislead classification.

Density Estimation · Dimensionality Reduction +1

Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries

2 code implementations · 11 Jan 2019 · Luca Demetrio, Battista Biggio, Giovanni Lagorio, Fabio Roli, Alessandro Armando

Based on this finding, we propose a novel attack algorithm that generates adversarial malware binaries by changing only a few tens of bytes in the file header.

Cryptography and Security

Poisoning Behavioral Malware Clustering

no code implementations · 25 Nov 2018 · Battista Biggio, Konrad Rieck, Davide Ariu, Christian Wressnegger, Igino Corona, Giorgio Giacinto, Fabio Roli

Clustering algorithms have become a popular tool in computer security to analyze the behavior of malware variants, identify novel malware families, and generate signatures for antivirus systems.

Clustering · Computer Security +1

Is Data Clustering in Adversarial Settings Secure?

no code implementations · 25 Nov 2018 · Battista Biggio, Ignazio Pillai, Samuel Rota Bulò, Davide Ariu, Marcello Pelillo, Fabio Roli

In this work we propose a general framework that allows one to identify potential attacks against clustering algorithms, and to evaluate their impact, by making specific assumptions on the adversary's goal, knowledge of the attacked system, and capabilities of manipulating the input data.

Clustering

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks

no code implementations · 8 Sep 2018 · Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli

Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model.

Towards Adversarial Configurations for Software Product Lines

no code implementations · 30 May 2018 · Paul Temple, Mathieu Acher, Battista Biggio, Jean-Marc Jézéquel, Fabio Roli

Ensuring that all supposedly valid configurations of a software product line (SPL) lead to well-formed and acceptable products is challenging, since it is usually impractical to enumerate and test all individual products of an SPL.

BIG-bench Machine Learning · valid

Is feature selection secure against training data poisoning?

no code implementations · 21 Apr 2018 · Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli

Learning in adversarial settings is becoming an important task for application domains where attackers may inject malicious data into the training set to subvert normal operation of data-driven technologies.

Computational Efficiency · Data Poisoning +2

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning

1 code implementation · 1 Apr 2018 · Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li

As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by machine learning algorithms.

BIG-bench Machine Learning · regression
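
As a generic illustration of training-set poisoning against regression (a crude stand-in, not the paper's optimization-based attacks or its countermeasures), the sketch below injects a handful of adversarial points and shows how they skew a ridge regressor's coefficients; the data and poisoning points are hypothetical.

```python
# Hypothetical sketch: a few injected training points skew a regression fit.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.05, size=200)

clean = Ridge(alpha=1.0).fit(X, y)

# Inject 10 poisoning points (5% of the data) at a corner of the feature
# space, with adversarially chosen responses.
X_p = np.vstack([X, np.ones((10, 1))])
y_p = np.concatenate([y, -5.0 * np.ones(10)])
poisoned = Ridge(alpha=1.0).fit(X_p, y_p)

print("clean slope:   ", clean.coef_[0])
print("poisoned slope:", poisoned.coef_[0])
```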

Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables

1 code implementation · 12 Mar 2018 · Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, Fabio Roli

Machine-learning methods have already been exploited as useful tools for detecting malicious executable files.

Cryptography and Security

Explaining Black-box Android Malware Detection

no code implementations · 9 Mar 2018 · Marco Melis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli

In this work, we generalize this approach to any black-box machine-learning model, by leveraging a gradient-based approach to identify the most influential local features.

Android Malware Detection · BIG-bench Machine Learning +1
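
The snippet below illustrates the general flavor of gradient-based local feature attribution (a generic gradient-times-input explanation, not the paper's exact formulation); the toy detector and binary feature vector are assumptions for illustration.

```python
# Hypothetical sketch of gradient*input attribution on a toy detector.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(30, 16), nn.ReLU(), nn.Linear(16, 1))

# A binary feature vector standing in for an app's permissions/API calls.
x = torch.randint(0, 2, (1, 30)).float().requires_grad_(True)

score = model(x).squeeze()   # scalar detection score
score.backward()

attribution = (x.grad * x).squeeze()              # gradient * input relevance
top = attribution.abs().topk(5).indices.tolist()  # most influential features
print("most influential feature indices:", top)
```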

Super-sparse Learning in Similarity Spaces

no code implementations · 17 Dec 2017 · Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Fabio Roli

In several applications, input samples are more naturally represented in terms of similarities between each other, rather than in terms of feature vectors.

General Classification · Sparse Learning

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

no code implementations · 8 Dec 2017 · Battista Biggio, Fabio Roli

In this work, we provide a thorough overview of the evolution of this research area over the last ten years and beyond, starting from pioneering, earlier work on the security of non-deep learning algorithms up to more recent work aimed at understanding the security properties of deep learning algorithms, in the context of computer vision and cybersecurity tasks.

BIG-bench Machine Learning · Misconceptions

Security Evaluation of Pattern Classifiers under Attack

no code implementations · 2 Sep 2017 · Battista Biggio, Giorgio Fumera, Fabio Roli

We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications.

Classification · General Classification +1

On Security and Sparsity of Linear Classifiers for Adversarial Settings

no code implementations · 31 Aug 2017 · Ambra Demontis, Paolo Russu, Battista Biggio, Giorgio Fumera, Fabio Roli

However, in such settings, they have been shown to be vulnerable to adversarial attacks, including the deliberate manipulation of data at test time to evade detection.

Malware Detection

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

no code implementations · 29 Aug 2017 · Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli

This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process.

Data Poisoning · Handwritten Digit Recognition +1

Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

no code implementations · 23 Aug 2017 · Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli

Deep neural networks have been widely adopted in recent years, exhibiting impressive performances in several application domains.

General Classification

Evasion Attacks against Machine Learning at Test Time

1 code implementation · 21 Aug 2017 · Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli

In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data.

BIG-bench Machine Learning · Malware Detection +1
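
To make the evasion setting concrete, the sketch below performs a single gradient-based evasion step against a toy differentiable classifier (a simple one-step perturbation, not the paper's iterative gradient attack with a density term); the model, sample, and step size are hypothetical.

```python
# Hypothetical sketch of a one-step gradient-based evasion attack.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

x = torch.randn(1, 20, requires_grad=True)  # stand-in "malicious" sample
y = torch.tensor([1])                       # its true (malicious) class

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

eps = 0.25
x_adv = (x + eps * x.grad.sign()).detach()  # step that increases the loss

print("original prediction: ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```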

Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection

no code implementations · 28 Apr 2017 · Ambra Demontis, Marco Melis, Battista Biggio, Davide Maiorca, Daniel Arp, Konrad Rieck, Igino Corona, Giorgio Giacinto, Fabio Roli

To cope with the increasing variability and sophistication of modern attacks, machine learning has been widely adopted as a statistically-sound tool for malware detection.

Cryptography and Security

AdversariaLib: An Open-source Library for the Security Evaluation of Machine Learning Algorithms Under Attack

no code implementations · 15 Nov 2016 · Igino Corona, Battista Biggio, Davide Maiorca

We present AdversariaLib, an open-source Python library for the security evaluation of machine learning (ML) against carefully-targeted attacks.

BIG-bench Machine Learning · General Classification

Statistical Meta-Analysis of Presentation Attacks for Secure Multibiometric Systems

no code implementations · 6 Sep 2016 · Battista Biggio, Giorgio Fumera, Gian Luca Marcialis, Fabio Roli

Prior work has shown that multibiometric systems are vulnerable to presentation attacks, assuming that their matching score distribution is identical to that of genuine users, without fabricating any fake trait.

Randomized Prediction Games for Adversarial Machine Learning

no code implementations · 3 Sep 2016 · Samuel Rota Bulò, Battista Biggio, Ignazio Pillai, Marcello Pelillo, Fabio Roli

In spam and malware detection, attackers exploit randomization to obfuscate malicious data and increase their chances of evading detection at test time; e.g., malware code is typically obfuscated using random strings or byte sequences to hide known exploits.

BIG-bench Machine Learning · General Classification +2

Security Evaluation of Support Vector Machines in Adversarial Environments

no code implementations · 30 Jan 2014 · Battista Biggio, Igino Corona, Blaine Nelson, Benjamin I. P. Rubinstein, Davide Maiorca, Giorgio Fumera, Giorgio Giacinto, Fabio Roli

Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering.

Intrusion Detection · Malware Detection

Poisoning Attacks against Support Vector Machines

1 code implementation · 27 Jun 2012 · Battista Biggio, Blaine Nelson, Pavel Laskov

Such attacks inject specially crafted training data that increases the SVM's test error.
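
As a crude illustration of the effect described above (not the paper's gradient-based poisoning algorithm, which optimizes each injected point against a validation objective), the sketch below flips the labels of a few training points near the decision boundary and shows the resulting drop in test accuracy; the dataset and attack budget are hypothetical.

```python
# Hypothetical sketch: a handful of corrupted training points degrade an SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clean = SVC(kernel="linear").fit(X_tr, y_tr)
print("clean test accuracy:   ", clean.score(X_te, y_te))

# Flip the labels of the 15 training points closest to the decision boundary,
# a crude stand-in for optimally crafted poisoning points.
margins = np.abs(clean.decision_function(X_tr))
idx = np.argsort(margins)[:15]
y_poison = y_tr.copy()
y_poison[idx] = 1 - y_poison[idx]

poisoned = SVC(kernel="linear").fit(X_tr, y_poison)
print("poisoned test accuracy:", poisoned.score(X_te, y_te))
```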
