Search Results for author: Ambra Demontis

Found 26 papers, 11 papers with code

Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks

no code implementations • 12 Oct 2023 • Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio

Neural network pruning has been shown to be an effective technique for reducing network size, trading off desirable properties like generalization and robustness to adversarial attacks for higher sparsity.

Network Pruning

Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks

no code implementations • 13 Sep 2023 • Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli

We empirically show that this defense improves the performance of RGB-D systems against adversarial examples even when they are computed ad hoc to circumvent this detection mechanism, and that it is also more effective than adversarial training.

Object Recognition

Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training

no code implementations • 1 Jul 2023 • Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Deep learning models have grown substantially in the number of parameters they possess, leading to a larger number of operations being executed during inference.

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning

no code implementations • 4 May 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli

In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the last 15 years.

BIG-bench Machine Learning • Data Poisoning

Machine Learning Security against Data Poisoning: Are We There Yet?

1 code implementation • 12 Apr 2022 • Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

The recent success of machine learning (ML) has been fueled by the increasing availability of computing power and large amounts of data in many different applications.

BIG-bench Machine Learning • Data Poisoning

Energy-Latency Attacks via Sponge Poisoning

2 code implementations • 14 Mar 2022 • Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Sponge examples are test-time inputs carefully optimized to increase energy consumption and latency of neural networks when deployed on hardware accelerators.

Federated Learning

The Threat of Offensive AI to Organizations

no code implementations • 30 Jun 2021 • Yisroel Mirsky, Ambra Demontis, Jaidip Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xiangyu Zhang, Wenke Lee, Yuval Elovici, Battista Biggio

Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations.

Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions

1 code implementation • 14 Jun 2021 • Antonio Emanuele Cinà, Kathrin Grosse, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Backdoor attacks inject poisoning samples during training, with the goal of forcing a machine learning model to output an attacker-chosen class when presented with a specific trigger at test time.

BIG-bench Machine Learning • Incremental Learning

BAARD: Blocking Adversarial Examples by Testing for Applicability, Reliability and Decidability

1 code implementation • 2 May 2021 • Xinglong Chang, Katharina Dost, Kaiqi Zhao, Ambra Demontis, Fabio Roli, Gill Dobbie, Jörg Wicker

An applicability domain is defined based on the known compounds, and any unknown compound that falls outside this domain is rejected.

Blocking

The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?

1 code implementation • 23 Mar 2021 • Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

One of the most concerning threats for modern AI systems is data poisoning, where the attacker injects maliciously crafted training data to corrupt the system's behavior at test time.

Bilevel Optimization • Data Poisoning

Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers

no code implementations • 6 Jun 2020 • Stefano Melacci, Gabriele Ciravegna, Angelo Sotgiu, Ambra Demontis, Battista Biggio, Marco Gori, Fabio Roli

Adversarial attacks on machine learning-based classifiers, along with defense mechanisms, have been widely studied in the context of single-label classification problems.

Multi-Label Classification

Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware?

no code implementations • 4 May 2020 • Marco Melis, Michele Scalas, Ambra Demontis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli

While machine-learning algorithms have demonstrated a strong ability in detecting Android malware, they can be evaded by sparse evasion attacks crafted by injecting a small set of fake components, e.g., permissions and system calls, without compromising the malware's intrusive functionality.

Adversarial Robustness • Android Malware Detection +1

Deep Neural Rejection against Adversarial Examples

1 code implementation • 1 Oct 2019 • Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli

Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time.

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks

no code implementations • 8 Sep 2018 • Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli

Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model.

Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables

1 code implementation • 12 Mar 2018 • Bojan Kolosnjaji, Ambra Demontis, Battista Biggio, Davide Maiorca, Giorgio Giacinto, Claudia Eckert, Fabio Roli

Machine-learning methods have already been exploited as useful tools for detecting malicious executable files.

Cryptography and Security

Super-sparse Learning in Similarity Spaces

no code implementations • 17 Dec 2017 • Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Fabio Roli

In several applications, input samples are more naturally represented in terms of similarities between each other, rather than in terms of feature vectors.

General Classification • Sparse Learning

On Security and Sparsity of Linear Classifiers for Adversarial Settings

no code implementations • 31 Aug 2017 • Ambra Demontis, Paolo Russu, Battista Biggio, Giorgio Fumera, Fabio Roli

However, in such settings, they have been shown to be vulnerable to adversarial attacks, including the deliberate manipulation of data at test time to evade detection.

Malware Detection

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

no code implementations • 29 Aug 2017 • Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli

This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process.

Data Poisoning • Handwritten Digit Recognition +1

Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

no code implementations • 23 Aug 2017 • Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli

Deep neural networks have been widely adopted in recent years, exhibiting impressive performances in several application domains.

General Classification

Yes, Machine Learning Can Be More Secure! A Case Study on Android Malware Detection

no code implementations • 28 Apr 2017 • Ambra Demontis, Marco Melis, Battista Biggio, Davide Maiorca, Daniel Arp, Konrad Rieck, Igino Corona, Giorgio Giacinto, Fabio Roli

To cope with the increasing variability and sophistication of modern attacks, machine learning has been widely adopted as a statistically sound tool for malware detection.

Cryptography and Security
