Search Results for author: Maura Pintor

Found 16 papers, 8 papers with code

Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes

1 code implementation • NeurIPS Workshop ImageNet_PPF 2021 • Utku Ozbulak, Maura Pintor, Arnout Van Messem, Wesley De Neve

We find that $71\%$ of the adversarial examples that achieve model-to-model adversarial transferability are misclassified into one of the top-5 classes predicted for the underlying source images.

Benchmarking
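
The top-5 overlap reported above can be measured directly. Below is a minimal sketch, not the paper's evaluation code, that checks whether a transferred adversarial example lands in one of the top-5 classes the source model predicts for the clean image; the model and tensor names are illustrative.

```python
import torch

def top5_overlap_rate(source_model, target_model, x_clean, x_adv):
    """Fraction of transferable adversarial examples whose target-model
    prediction falls within the top-5 classes the source model assigns
    to the corresponding clean images. Illustrative sketch only."""
    source_model.eval()
    target_model.eval()
    with torch.no_grad():
        # Top-5 classes the source model predicts for the clean images.
        top5 = source_model(x_clean).topk(5, dim=1).indices           # (N, 5)
        # Target-model predictions on clean and adversarial images.
        clean_pred = target_model(x_clean).argmax(dim=1, keepdim=True)
        adv_pred = target_model(x_adv).argmax(dim=1, keepdim=True)    # (N, 1)
        # Keep only examples that actually transfer (change the prediction).
        transferred = (adv_pred != clean_pred).squeeze(1)
        hits = (adv_pred == top5).any(dim=1)
    return hits[transferred].float().mean().item()
```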

Raze to the Ground: Query-Efficient Adversarial HTML Attacks on Machine-Learning Phishing Webpage Detectors

1 code implementation • 4 Oct 2023 • Biagio Montaruli, Luca Demetrio, Maura Pintor, Luca Compagna, Davide Balzarotti, Battista Biggio

Machine-learning phishing webpage detectors (ML-PWD) have been shown to suffer from adversarial manipulations of the HTML code of the input webpage.
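
The attack is query-efficient and black-box; the sketch below shows the generic greedy loop such attacks follow, keeping an HTML mutation only if it lowers the detector's phishing score. The `detector_score` oracle and the `mutations` list are assumed interfaces, not the paper's actual manipulations or API.

```python
import random

def greedy_html_attack(html, detector_score, mutations, budget=100, threshold=0.5):
    """Generic query-efficient black-box loop (sketch): try candidate
    functionality-preserving HTML edits, keep only score-decreasing ones."""
    best, best_score = html, detector_score(html)
    for _ in range(budget):
        if best_score < threshold:                   # webpage now evades the detector
            break
        candidate = random.choice(mutations)(best)   # apply one candidate edit
        score = detector_score(candidate)            # one query to the detector
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score
```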

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks

no code implementations • 8 Sep 2018 • Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli

Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model.
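
A minimal sketch of how transferability is typically measured, assuming a surrogate and a target model: craft adversarial examples on the surrogate with a one-step FGSM attack and count how often they also fool the target. The toy models and data below are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two independently initialized toy classifiers stand in for the
# surrogate and the unknown target model (placeholders only).
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
target = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))

x = torch.rand(16, 1, 28, 28)       # dummy images in [0, 1]
y = torch.randint(0, 10, (16,))     # dummy labels

# FGSM on the surrogate: one signed-gradient step of size eps.
x.requires_grad_(True)
loss = nn.functional.cross_entropy(surrogate(x), y)
grad = torch.autograd.grad(loss, x)[0]
x_adv = (x + 0.1 * grad.sign()).clamp(0, 1).detach()

# Transferability: fraction of adversarial examples that also fool the target.
with torch.no_grad():
    fooled = (target(x_adv).argmax(1) != y).float().mean()
print(f"transfer success rate: {fooled:.2f}")
```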

Explaining Machine Learning DGA Detectors from DNS Traffic Data

no code implementations • 10 Aug 2022 • Giorgio Piras, Maura Pintor, Luca Demetrio, Battista Biggio

One of the most common causes of downtime in online systems is the widely popular cyber attack known as Distributed Denial of Service (DDoS), in which a network of infected devices (a botnet) is exploited to flood a service's computational capacity at an attacker's command.

Decision Making
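
For context, the botnets discussed in the paper often locate their command-and-control servers through a domain generation algorithm (DGA), which deterministically derives pseudo-random domains from a shared seed. A toy illustration, not taken from the paper:

```python
import hashlib

def toy_dga(seed: str, date: str, count: int = 5):
    """Toy domain generation algorithm: derives pseudo-random domains
    from a shared seed and the current date, so bots and the botmaster
    can rendezvous without hard-coded addresses. Illustrative only."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}-{date}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

print(toy_dga("botnet-seed", "2022-08-10"))
```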

Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training

no code implementations • 1 Jul 2023 • Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Deep learning models have undergone a significant increase in their number of parameters, leading to a larger number of operations being executed during inference.
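
The paper's exact energy-aware objective is not reproduced here, but a common way to cut the number of effective operations is to add a sparsity penalty on activations to the training loss, as in the illustrative sketch below (the model, data, and the 0.1 weight are all placeholders).

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))

hidden = model[1](model[0](x))              # post-ReLU activations
logits = model[2](hidden)
task_loss = nn.functional.cross_entropy(logits, y)

# Sparsity penalty: an L1 term on activations pushes more of them to
# exactly zero, letting sparsity-aware hardware skip those operations.
energy_penalty = hidden.abs().mean()
loss = task_loss + 0.1 * energy_penalty     # 0.1 is an illustrative weight

optimizer.zero_grad()
loss.backward()
optimizer.step()
```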

Samples on Thin Ice: Re-Evaluating Adversarial Pruning of Neural Networks

no code implementations • 12 Oct 2023 • Giorgio Piras, Maura Pintor, Ambra Demontis, Battista Biggio

Neural network pruning has been shown to be an effective technique for reducing network size, trading desirable properties like generalization and robustness to adversarial attacks for higher sparsity.

Network Pruning
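
Re-evaluating a pruned network's robustness can be sketched with standard tooling: prune weights by magnitude, then attack the pruned model. This is a minimal illustration using `torch.nn.utils.prune` and a one-step FGSM attack, not the paper's protocol.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))  # placeholder model
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))

# Magnitude pruning: zero out the 90% smallest weights of the linear layer.
prune.l1_unstructured(model[1], name="weight", amount=0.9)

# Re-evaluate robustness with a one-step FGSM attack on the pruned model.
x.requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), y)
grad = torch.autograd.grad(loss, x)[0]
x_adv = (x + 8 / 255 * grad.sign()).clamp(0, 1)

with torch.no_grad():
    robust_acc = (model(x_adv).argmax(1) == y).float().mean()
print(f"robust accuracy after pruning: {robust_acc:.2f}")
```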

Robustness-Congruent Adversarial Training for Secure Machine Learning Model Updates

no code implementations • 27 Feb 2024 • Daniele Angioni, Luca Demetrio, Maura Pintor, Luca Oneto, Davide Anguita, Battista Biggio, Fabio Roli

In this work, we show that this problem also affects robustness to adversarial examples, thereby hindering the development of secure model update practices.

Adversarial Robustness • regression
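
The robustness regression described above can be quantified through "negative flips": samples on which the old model resisted an adversarial input but the updated model does not. A sketch, assuming both models and precomputed adversarial examples:

```python
import torch

def robustness_negative_flips(old_model, new_model, x_adv, y):
    """Fraction of samples the old model classifies correctly under
    attack but the updated model does not, i.e. robustness lost by the
    update. Sketch only; x_adv is assumed precomputed."""
    with torch.no_grad():
        old_ok = old_model(x_adv).argmax(1) == y
        new_ok = new_model(x_adv).argmax(1) == y
    return (old_ok & ~new_ok).float().mean().item()
```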
