Search Results for author: Giorgio Fumera

Found 11 papers, 2 papers with code

Adversarial Attacks Against Uncertainty Quantification

no code implementations • 19 Sep 2023 • Emanuele Ledda, Daniele Angioni, Giorgio Piras, Giorgio Fumera, Battista Biggio, Fabio Roli

Machine-learning models can be fooled by adversarial examples, i.e., carefully crafted input perturbations that force models to output wrong predictions.

Semantic Segmentation Uncertainty Quantification
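The snippet below is a minimal, hedged sketch of how such an adversarial perturbation can be crafted: a single FGSM-style signed-gradient step in PyTorch. It is purely illustrative and is not the attack against uncertainty quantification proposed in this paper; the `model`, `x`, `y`, and `eps` names are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """One signed-gradient step that nudges x toward a wrong prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input component by eps in the direction that increases the loss,
    # then clip back to the valid input range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```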

Dropout Injection at Test Time for Post Hoc Uncertainty Quantification in Neural Networks

1 code implementation • 6 Feb 2023 • Emanuele Ledda, Giorgio Fumera, Fabio Roli

Among Bayesian methods, Monte-Carlo dropout provides principled tools for evaluating the epistemic uncertainty of neural networks.

Crowd Counting Uncertainty Quantification
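As a rough illustration of the idea behind Monte-Carlo dropout, the sketch below keeps dropout layers active at inference time and uses the spread of repeated predictions as an epistemic-uncertainty estimate. It is a generic sketch, not the post hoc dropout-injection procedure proposed in the paper; `model`, `x`, and `n_samples` are assumed names.

```python
import torch

def mc_dropout_predict(model, x, n_samples=30):
    model.eval()
    # Re-enable only the dropout layers, leaving e.g. batch norm in eval mode.
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        preds = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    # Predictive mean and per-class variance across stochastic forward passes.
    return preds.mean(dim=0), preds.var(dim=0)
```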

Deep Neural Rejection against Adversarial Examples

1 code implementation • 1 Oct 2019 • Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli

Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time.
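A rejection option can be illustrated with the toy rule below: refuse to classify when the top softmax score falls under a threshold. This is a generic confidence-based sketch, not the multi-layer, SVM-based Deep Neural Rejection detector described in the paper; `model`, `x`, and `threshold` are assumed names.

```python
import torch

def predict_with_reject(model, x, threshold=0.5):
    probs = torch.softmax(model(x), dim=-1)
    conf, labels = probs.max(dim=-1)
    labels[conf < threshold] = -1  # -1 marks rejected (unclassified) samples
    return labels
```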

Is feature selection secure against training data poisoning?

no code implementations • 21 Apr 2018 • Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli

Learning in adversarial settings is becoming an important task for application domains where attackers may inject malicious data into the training set to subvert the normal operation of data-driven technologies.

Computational Efficiency Data Poisoning +2

Super-sparse Learning in Similarity Spaces

no code implementations • 17 Dec 2017 • Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Fabio Roli

In several applications, input samples are more naturally represented in terms of similarities between each other, rather than in terms of feature vectors.

General Classification Sparse Learning
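The toy sketch below illustrates classification in a similarity space: each sample is represented by its RBF similarity to a small set of prototypes, and a linear model is trained on that representation. It is an assumption-laden illustration of the general idea, not the super-sparse learning algorithm of the paper; `n_prototypes` and `gamma` are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

def fit_similarity_space_classifier(X_train, y_train, n_prototypes=50, gamma=0.1):
    # Pick a small set of training samples to act as prototypes.
    rng = np.random.default_rng(0)
    idx = rng.choice(len(X_train), size=n_prototypes, replace=False)
    prototypes = X_train[idx]
    # Represent every sample by its similarity to each prototype.
    S = rbf_kernel(X_train, prototypes, gamma=gamma)
    clf = LogisticRegression(max_iter=1000).fit(S, y_train)
    return clf, prototypes
```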

Security Evaluation of Pattern Classifiers under Attack

no code implementations • 2 Sep 2017 • Battista Biggio, Giorgio Fumera, Fabio Roli

We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications.

Classification General Classification +1

On Security and Sparsity of Linear Classifiers for Adversarial Settings

no code implementations • 31 Aug 2017 • Ambra Demontis, Paolo Russu, Battista Biggio, Giorgio Fumera, Fabio Roli

However, in such settings, they have been shown to be vulnerable to adversarial attacks, including the deliberate manipulation of data at test time to evade detection.

Malware Detection

Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid

no code implementations • 23 Aug 2017 • Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli

Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains.

General Classification

Progressive Boosting for Class Imbalance

no code implementations • 5 Jun 2017 • Roghayeh Soleymani, Eric Granger, Giorgio Fumera

Results show that PBoost can outperform state-of-the-art techniques in terms of both accuracy and complexity across different levels of imbalance and overlap between classes.

Ensemble Learning

Statistical Meta-Analysis of Presentation Attacks for Secure Multibiometric Systems

no code implementations • 6 Sep 2016 • Battista Biggio, Giorgio Fumera, Gian Luca Marcialis, Fabio Roli

Prior work has shown that multibiometric systems are vulnerable to presentation attacks, under the assumption that the attack's matching score distribution is identical to that of genuine users, without fabricating any fake trait.

Security Evaluation of Support Vector Machines in Adversarial Environments

no code implementations • 30 Jan 2014 • Battista Biggio, Igino Corona, Blaine Nelson, Benjamin I. P. Rubinstein, Davide Maiorca, Giorgio Fumera, Giorgio Giacinto, Fabio Roli

Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering.

Intrusion Detection Malware Detection
