1 code implementation • 29 Oct 2024 • Emanuele Ledda, Giovanni Scodeller, Daniele Angioni, Giorgio Piras, Antonio Emanuele Cinà, Giorgio Fumera, Battista Biggio, Fabio Roli
In learning problems, the noise inherent to the task at hand prevents inference without some degree of uncertainty.
no code implementations • 19 Sep 2023 • Emanuele Ledda, Daniele Angioni, Giorgio Piras, Giorgio Fumera, Battista Biggio, Fabio Roli
Machine-learning models can be fooled by adversarial examples, i.e., carefully crafted input perturbations that force models to output wrong predictions.
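As a minimal illustration of the idea (not taken from the paper), an adversarial perturbation for a linear classifier can be crafted by stepping against the sign of the score gradient, which for a linear model is simply the weight vector; the weights, input, and epsilon below are made-up values for the sketch.

```python
import numpy as np

# Simple linear classifier: score = w . x + b; prediction = sign(score).
# Weights and input are illustrative, not from any trained model.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

x = np.array([0.4, -0.3, 0.2])   # clean input, classified as positive
assert w @ x + b > 0

# Gradient-sign perturbation: move each feature by eps in the direction
# that most decreases the score, i.e., against sign(w), bounded in the
# L-infinity norm. For a linear model this is the worst-case perturbation.
eps = 0.5
x_adv = x - eps * np.sign(w)

# The bounded perturbation flips the sign of the score,
# so the prediction changes while x_adv stays close to x.
print(w @ x + b, w @ x_adv + b)
```

The same gradient-sign construction underlies attacks on differentiable models more generally, where `w` is replaced by the gradient of the loss with respect to the input.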
1 code implementation • 6 Feb 2023 • Emanuele Ledda, Giorgio Fumera, Fabio Roli
Among Bayesian methods, Monte-Carlo dropout provides principled tools for evaluating the epistemic uncertainty of neural networks.
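A minimal NumPy-only sketch of Monte-Carlo dropout (with made-up weights, not the paper's model): dropout is kept active at test time, and the spread of predictions across T stochastic forward passes serves as an estimate of epistemic uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer network with fixed, illustrative weights.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout kept active at test time."""
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop    # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)          # inverted-dropout scaling
    return h @ W2

x = rng.normal(size=(1, 4))

# Monte-Carlo dropout: average T stochastic passes to approximate the
# predictive distribution; their standard deviation is a proxy for
# the model's epistemic uncertainty on this input.
T = 200
preds = np.array([stochastic_forward(x) for _ in range(T)]).ravel()
mean, std = preds.mean(), preds.std()
print(f"predictive mean={mean:.3f}, epistemic std={std:.3f}")
```

In practice the same procedure is applied to a trained network (e.g., by leaving `Dropout` layers in training mode at inference), with `T` forward passes per input.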
1 code implementation • 1 Oct 2019 • Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli
Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time.
no code implementations • 21 Apr 2018 • Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli
Learning in adversarial settings is becoming an important task for application domains where attackers may inject malicious data into the training set to subvert normal operation of data-driven technologies.
no code implementations • 17 Dec 2017 • Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Fabio Roli
In several applications, input samples are more naturally represented in terms of similarities between each other, rather than in terms of feature vectors.
no code implementations • 2 Sep 2017 • Battista Biggio, Giorgio Fumera, Fabio Roli
We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications.
no code implementations • 31 Aug 2017 • Ambra Demontis, Paolo Russu, Battista Biggio, Giorgio Fumera, Fabio Roli
In security-sensitive settings, however, machine-learning models have been shown to be vulnerable to adversarial attacks, including the deliberate manipulation of data at test time to evade detection.
no code implementations • 23 Aug 2017 • Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli
Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains.
no code implementations • 5 Jun 2017 • Roghayeh Soleymani, Eric Granger, Giorgio Fumera
Results show that PBoost can outperform state-of-the-art techniques in terms of both accuracy and complexity over different levels of imbalance and overlap between classes.
no code implementations • 6 Sep 2016 • Battista Biggio, Giorgio Fumera, Gian Luca Marcialis, Fabio Roli
Prior work has shown that multibiometric systems are vulnerable to presentation attacks, under the assumption that the attacker's matching score distribution is identical to that of genuine users, i.e., without fabricating any fake trait.
no code implementations • 30 Jan 2014 • Battista Biggio, Igino Corona, Blaine Nelson, Benjamin I. P. Rubinstein, Davide Maiorca, Giorgio Fumera, Giorgio Giacinto, Fabio Roli
Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering.