Search Results for author: Maura Pintor

Found 8 papers, 5 papers with code

Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes

1 code implementation • NeurIPS Workshop ImageNet_PPF 2021 • Utku Ozbulak, Maura Pintor, Arnout Van Messem, Wesley De Neve

We find that $71\%$ of the adversarial examples that achieve model-to-model adversarial transferability are misclassified into one of the top-5 classes predicted for the underlying source images.
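
The statistic above is a top-5 membership check. A minimal sketch (not the authors' code) of how such a fraction could be computed from the source model's clean scores and the labels assigned to the adversarial examples on the target model:

```python
import numpy as np

def fraction_in_source_top5(clean_logits, adv_labels):
    """Fraction of adversarial predictions that land in the top-5 classes
    predicted for the corresponding clean source images.

    clean_logits: (n_samples, n_classes) source-model scores on clean inputs
    adv_labels:   (n_samples,) classes assigned to the adversarial examples
    """
    top5 = np.argsort(clean_logits, axis=1)[:, -5:]      # 5 highest-scoring classes per image
    hits = np.any(top5 == adv_labels[:, None], axis=1)   # adversarial label among them?
    return hits.mean()

# toy usage with random scores (illustrative only, not ImageNet data)
rng = np.random.default_rng(0)
print(fraction_in_source_top5(rng.normal(size=(100, 1000)),
                              rng.integers(0, 1000, size=100)))
```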

Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples

2 code implementations • ICML Workshop AML 2021 • Maura Pintor, Luca Demetrio, Angelo Sotgiu, Giovanni Manca, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli

Although guidelines and best practices have been suggested to improve current adversarial robustness evaluations, the lack of automatic testing and debugging tools makes it difficult to apply these recommendations in a systematic manner.

Adversarial Robustness
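
As one illustration of the kind of automated check such debugging tools can run (a hedged sketch, not the paper's actual indicators): a gradient-based attack whose loss never improves on its starting value is likely failing, for instance because of obfuscated gradients or a badly tuned step size.

```python
import numpy as np

def flag_stuck_attack(loss_curve, tol=1e-6):
    """Flag a possible attack failure: the per-iteration attack loss never
    improves on the starting point, suggesting the optimization is stuck."""
    loss_curve = np.asarray(loss_curve, dtype=float)
    return loss_curve.min() >= loss_curve[0] - tol

print(flag_stuck_attack([2.3, 2.3, 2.3, 2.3]))   # True  -> inspect gradients / step size
print(flag_stuck_attack([2.3, 1.7, 0.9, 0.4]))   # False -> the attack is making progress
```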

secml: A Python Library for Secure and Explainable Machine Learning

2 code implementations • 20 Dec 2019 • Maura Pintor, Luca Demetrio, Angelo Sotgiu, Marco Melis, Ambra Demontis, Battista Biggio

We present secml, an open-source Python library for secure and explainable machine learning.
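
A hypothetical usage sketch of the library, based on class names that appear in secml's public tutorials; module paths and signatures may differ between versions, so it should be checked against the documentation:

```python
# Hypothetical sketch -- class names taken from secml tutorials; verify against the docs.
from secml.data.loader import CDLRandomBlobs
from secml.data.splitter import CTrainTestSplit
from secml.ml.classifiers import CClassifierSVM
from secml.ml.peval.metrics import CMetricAccuracy

# synthetic two-class dataset and a train/test split
dataset = CDLRandomBlobs(n_samples=200, centers=2, random_state=0).load()
tr, ts = CTrainTestSplit(train_size=0.8, random_state=0).split(dataset)

clf = CClassifierSVM()           # secml wrapper around a standard SVM
clf.fit(tr.X, tr.Y)              # recent releases take (X, Y); older ones take a CDataset

acc = CMetricAccuracy().performance_score(ts.Y, clf.predict(ts.X))
print(f"test accuracy: {acc:.2f}")
```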

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks

no code implementations • 8 Sep 2018 • Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli

Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model.
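
A minimal sketch (not the paper's methodology) of how transferability is typically measured: craft adversarial examples against a surrogate model, then count how often they also mislead a separately trained target model. The linear models and the one-step FGSM attack below are illustrative assumptions.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    """One-step FGSM perturbation crafted on the surrogate model."""
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def transfer_rate(surrogate, target, x, y, eps=0.1):
    """Fraction of surrogate-crafted adversarial examples misclassified by the target."""
    x_adv = fgsm(surrogate, x, y, eps)
    return (target(x_adv).argmax(dim=1) != y).float().mean().item()

# toy usage: two independently initialized linear classifiers (hypothetical models)
torch.manual_seed(0)
surrogate, target = nn.Linear(20, 3), nn.Linear(20, 3)
x, y = torch.randn(64, 20), torch.randint(0, 3, (64,))
print(f"transfer success rate: {transfer_rate(surrogate, target, x, y):.2f}")
```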
