Machine-learning models can be fooled by adversarial examples, i.e., carefully crafted input perturbations that force models to output wrong predictions.
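As a purely illustrative instance of such a perturbation, the sketch below implements a single-step, gradient-sign attack (FGSM-style) in PyTorch; the toy model, epsilon value, and function name are assumptions for illustration, not the attack formulations studied in these works.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy classifier used only to demonstrate the perturbation (an assumption).
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

def fgsm_perturb(model, x, y, eps=0.05):
    """Shift each input feature by eps in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

x = torch.randn(4, 20)            # a small batch of synthetic inputs
y = torch.tensor([0, 1, 0, 1])    # their (assumed) true labels
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```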
Among Bayesian methods, Monte Carlo dropout provides principled tools for evaluating the epistemic uncertainty of neural networks.
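A minimal sketch of the idea, assuming a toy PyTorch classifier: dropout is kept active at test time, the predictive distribution is estimated from repeated stochastic forward passes, and the variance across passes serves as a rough epistemic-uncertainty proxy. The architecture and the number of samples are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy two-layer classifier with dropout (an assumption for illustration).
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 3)
)

def mc_dropout_predict(model, x, n_samples=50):
    """Average class probabilities over stochastic forward passes with dropout on."""
    model.train()  # keeps dropout layers active (assumes no batch-norm layers)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean = probs.mean(dim=0)                 # predictive mean per sample
    epistemic = probs.var(dim=0).sum(dim=-1) # variance across passes as uncertainty
    return mean, epistemic

x = torch.randn(8, 20)  # a batch of synthetic inputs
mean, uncertainty = mc_dropout_predict(model, x)
print(mean.argmax(dim=1), uncertainty)
```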
Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time.
Learning in adversarial settings is becoming an important task for application domains where attackers may inject malicious data into the training set to subvert normal operation of data-driven technologies.
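For illustration only, the sketch below simulates one simple training-set attack, random label flipping on synthetic data; it is an assumed toy scenario, not the specific poisoning strategies analysed in this line of work.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary task (an assumption) to show the effect of poisoned labels.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# Attacker flips the labels of 20% of the training points.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned).score(X_te, y_te)
print(f"clean test accuracy: {clean_acc:.3f}, poisoned: {poisoned_acc:.3f}")
```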
In several applications, input samples are more naturally represented in terms of similarities between each other, rather than in terms of feature vectors.
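A minimal sketch of learning directly from pairwise similarities, assuming an RBF-style similarity function and scikit-learn's precomputed-kernel SVM; both the similarity choice and the toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Toy data: 6 samples, 2 classes; the classifier only ever sees similarities.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
y = np.array([0, 0, 0, 1, 1, 1])

def similarity(a, b, gamma=0.5):
    """RBF-style similarity between two samples (an illustrative choice)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

# Train-vs-train similarity matrix replaces the usual feature matrix.
K = np.array([[similarity(a, b) for b in X] for a in X])
clf = SVC(kernel="precomputed").fit(K, y)

# At test time, similarities are computed against the training samples.
x_new = rng.normal(size=(1, 4))
k_new = np.array([[similarity(x_new[0], b) for b in X]])
print(clf.predict(k_new))
```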
We propose a framework for empirical evaluation of classifier security that formalizes and generalizes the main ideas proposed in the literature, and give examples of its use in three real applications.
However, in such settings, they have been shown to be vulnerable to adversarial attacks, including the deliberate manipulation of data at test time to evade detection.
Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains.
Results show that PBoost can outperform state-of-the-art techniques in terms of both accuracy and complexity across different levels of imbalance and overlap between classes.
Prior work has shown that multibiometric systems are vulnerable to presentation attacks, under the assumption that the attacks' matching score distribution is identical to that of genuine users, i.e., without fabricating any fake trait.
Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering.