Search Results for author: Antonio Emanuele Cinà

Found 14 papers, 6 papers with code

Sonic: Fast and Transferable Data Poisoning on Clustering Algorithms

no code implementations14 Aug 2024 Francesco Villani, Dario Lazzaro, Antonio Emanuele Cinà, Matteo Dell'Amico, Battista Biggio, Fabio Roli

Data poisoning attacks on clustering algorithms have received limited attention, with existing methods struggling to scale efficiently as dataset sizes and feature counts increase.

Clustering, Data Poisoning

Understanding XAI Through the Philosopher's Lens: A Historical Perspective

no code implementations26 Jul 2024 Martina Mattioli, Antonio Emanuele Cinà, Marcello Pelillo

Although explainable AI (XAI) has recently become a hot topic and several different approaches have been developed, there is still a widespread belief that the field lacks a convincing unifying foundation.

Philosophy

AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples

no code implementations30 Apr 2024 Antonio Emanuele Cinà, Jérôme Rony, Maura Pintor, Luca Demetrio, Ambra Demontis, Battista Biggio, Ismail Ben Ayed, Fabio Roli

While novel attacks are continuously proposed, each is shown to outperform its predecessors under different experimental setups, hyperparameter settings, and numbers of forward and backward calls to the target models.

Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks

no code implementations13 Sep 2023 Yang Zheng, Luca Demetrio, Antonio Emanuele Cinà, Xiaoyi Feng, Zhaoqiang Xia, Xiaoyue Jiang, Ambra Demontis, Battista Biggio, Fabio Roli

We empirically show that this defense improves the performance of RGB-D systems against adversarial examples, even when these are computed ad hoc to circumvent the detection mechanism, and that it is also more effective than adversarial training.

Object Recognition

Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training

no code implementations1 Jul 2023 Dario Lazzaro, Antonio Emanuele Cinà, Maura Pintor, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Deep learning models have undergone a significant increase in the number of parameters they contain, leading to a larger number of operations executed during inference.

On the Limitations of Model Stealing with Uncertainty Quantification Models

no code implementations9 May 2023 David Pape, Sina Däubener, Thorsten Eisenhofer, Antonio Emanuele Cinà, Lea Schönherr

We observe that during training the models tend to make similar predictions, indicating that the network diversity we aimed to leverage through uncertainty quantification models is not high enough to yield improvements on the model stealing task.

Diversity, Uncertainty Quantification

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning

no code implementations4 May 2022 Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli

In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field in the last 15 years.

BIG-bench Machine Learning, Data Poisoning

Machine Learning Security against Data Poisoning: Are We There Yet?

1 code implementation12 Apr 2022 Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

The recent success of machine learning (ML) has been fueled by the increasing availability of computing power and large amounts of data in many different applications.

BIG-bench Machine Learning, Data Poisoning

Energy-Latency Attacks via Sponge Poisoning

2 code implementations14 Mar 2022 Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Sponge examples are test-time inputs carefully optimized to increase energy consumption and latency of neural networks when deployed on hardware accelerators.
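The energy cost exploited by sponge examples can be illustrated with a simple proxy: on sparsity-aware accelerators, zero activations are skipped, so inputs that produce denser (less sparse) ReLU activations cost more energy and latency. The sketch below (illustrative only; `relu_density` and its weight-list interface are hypothetical, not from the paper) measures that density for a toy fully connected ReLU network:

```python
import numpy as np

def relu_density(x, weights):
    """Energy-cost proxy: fraction of non-zero ReLU activations across layers.

    Sparsity-aware hardware skips zero activations, so a higher density
    means more operations actually executed. Sponge attacks optimize
    inputs (or, in sponge poisoning, the model) to push this density up.
    """
    nonzero = 0
    total = 0
    for W in weights:
        x = np.maximum(W @ x, 0.0)  # fully connected layer + ReLU
        nonzero += np.count_nonzero(x)
        total += x.size
    return nonzero / total
```

A sponge example would be an input `x` found by gradient ascent on a differentiable version of this density; a benign input typically leaves many activations at zero.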

Federated Learning

Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions

1 code implementation14 Jun 2021 Antonio Emanuele Cinà, Kathrin Grosse, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

Backdoor attacks inject poisoning samples during training, with the goal of forcing a machine learning model to output an attacker-chosen class when presented with a specific trigger at test time.
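The injection step described above can be sketched in a few lines. This is a generic illustration of trigger-based backdoor poisoning, not the paper's own code: the `poison` helper, the 3x3 corner patch, and the 5% poisoning rate are all assumptions chosen for clarity.

```python
import numpy as np

def poison(images, labels, target_class, frac=0.05, patch_value=1.0, seed=0):
    """Stamp a small trigger patch onto a random fraction of training
    images and relabel them as the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(frac * len(images)), replace=False)
    images[idx, -3:, -3:] = patch_value  # 3x3 trigger, bottom-right corner
    labels[idx] = target_class           # force the attacker's class
    return images, labels
```

A model trained on the poisoned set learns to associate the corner patch with `target_class`, so at test time any input carrying the patch is misclassified while clean inputs behave normally.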

BIG-bench Machine Learning, Incremental Learning

The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?

1 code implementation23 Mar 2021 Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo

One of the most concerning threats for modern AI systems is data poisoning, where the attacker injects maliciously crafted training data to corrupt the system's behavior at test time.

Bilevel Optimization, Data Poisoning

A black-box adversarial attack for poisoning clustering

1 code implementation9 Sep 2020 Antonio Emanuele Cinà, Alessandro Torcinovich, Marcello Pelillo

To fill this gap, in this work we propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.

Adversarial Attack, Clustering, +1
