1 code implementation • 28 Nov 2023 • Bernd Prach, Fabio Brau, Giorgio Buttazzo, Christoph H. Lampert
The robustness of neural networks against input perturbations with bounded magnitude represents a serious concern in the deployment of deep learning models in safety-critical systems.
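To make "bounded magnitude" concrete, here is a minimal sketch of how a Lipschitz bound on the logit map yields a certified robustness radius; the network, logit values, and Lipschitz constant below are illustrative assumptions, not the paper's models or results. If the logits are L-Lipschitz in the l2 norm, no perturbation of l2 norm smaller than (top1 − top2)/(√2·L) can flip the prediction.

```python
import numpy as np

def certified_radius(logits: np.ndarray, lipschitz_const: float) -> float:
    """Certified l2 radius for an L-Lipschitz logit map.

    If the logit function is L-Lipschitz in the l2 norm, no perturbation
    of norm below (top1 - top2) / (sqrt(2) * L) can change the prediction.
    """
    top2 = np.sort(logits)[-2:]          # two largest logits, ascending
    margin = top2[1] - top2[0]           # top1 - top2
    return margin / (np.sqrt(2.0) * lipschitz_const)

# Illustrative values only: logits from a hypothetical 1-Lipschitz network.
print(certified_radius(np.array([2.3, -0.1, 0.7]), lipschitz_const=1.0))
```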
no code implementations • 9 Sep 2022 • Fabio Brau, Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo
This work proposes a novel family of classifiers, namely Signed Distance Classifiers (SDCs), which, from a theoretical perspective, directly output the exact distance of the input x from the classification boundary rather than a probability score (e.g., SoftMax).
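As a concrete reference point (an illustrative sketch, not the SDC architecture itself): for a linear binary classifier f(x) = w·x + b, the signed distance of x from the decision boundary has the closed form f(x)/||w||, which is exactly the kind of quantity an SDC is meant to output directly for a deep model.

```python
import numpy as np

def signed_distance_linear(x: np.ndarray, w: np.ndarray, b: float) -> float:
    """Signed Euclidean distance of x from the hyperplane w.x + b = 0.

    Positive on the side w points toward, negative on the other side;
    the absolute value is the l2 distance to the decision boundary.
    """
    return (w @ x + b) / np.linalg.norm(w)

# Toy example with a hypothetical 2-D classifier.
w = np.array([3.0, 4.0])   # ||w|| = 5
x = np.array([1.0, 1.0])
print(signed_distance_linear(x, w, b=-2.0))   # (3 + 4 - 2) / 5 = 1.0
```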
no code implementations • 14 Mar 2022 • Giulio Rossolini, Federico Nesti, Fabio Brau, Alessandro Biondi, Giorgio Buttazzo
This work presents Z-Mask, a robust and effective strategy to improve the adversarial robustness of convolutional networks against physically realizable adversarial attacks.
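To illustrate the general idea, here is a hypothetical sketch of over-activation masking, assuming a simple per-location z-score criterion on an inner feature map; this is not the paper's actual Z-Mask implementation, and the threshold and pooling choices below are placeholders. Physical attacks such as adversarial patches tend to produce anomalously strong local activations, which can be localized and suppressed at the input.

```python
import torch
import torch.nn.functional as F

def overactivation_mask(feature_map: torch.Tensor, input_hw: tuple,
                        z_threshold: float = 3.0) -> torch.Tensor:
    """Hypothetical sketch: mask input regions whose features over-activate.

    feature_map: (C, H, W) activations from some inner layer.
    Returns a (1, H_in, W_in) binary mask (0 where activations look anomalous).
    """
    energy = feature_map.abs().sum(dim=0)                 # (H, W) spatial energy
    z = (energy - energy.mean()) / (energy.std() + 1e-8)  # z-score per location
    keep = (z < z_threshold).float()[None, None]          # 1 = keep, 0 = suppress
    # Upsample the mask to the input resolution so it can multiply the image.
    return F.interpolate(keep, size=input_hw, mode="nearest")[0]

# Usage: masked_image = image * overactivation_mask(features, image.shape[-2:])
```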
no code implementations • 4 Jan 2022 • Fabio Brau, Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo
In this regard, the Euclidean distance of the input from the classification boundary provides a well-established robustness measure, since it corresponds to the minimal adversarial perturbation capable of changing the prediction.
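A hedged sketch of how this distance can be estimated in practice, using a first-order (DeepFool-style) linearization as an illustrative estimator rather than the paper's method: linearizing a decision function g at x, where g(x) = 0 marks the boundary, the closest boundary point lies at distance approximately |g(x)| / ||∇g(x)||.

```python
import torch

def boundary_distance_estimate(g, x: torch.Tensor) -> torch.Tensor:
    """First-order estimate of the l2 distance from x to the set {g = 0}.

    g: differentiable scalar decision function (e.g., logit difference
       between the predicted class and the runner-up).
    Linearizing g at x gives distance ~= |g(x)| / ||grad g(x)||.
    """
    x = x.detach().requires_grad_(True)
    value = g(x)
    (grad,) = torch.autograd.grad(value, x)
    return value.abs() / grad.norm()

# Toy example with a linear g, where the estimate is exact: distance = 1.0.
w, b = torch.tensor([3.0, 4.0]), torch.tensor(-2.0)
print(boundary_distance_estimate(lambda x: w @ x + b,
                                 torch.tensor([1.0, 1.0])))
```

For a linear decision function the linearization is exact, matching the closed form |w·x + b|/||w|| above; for deep networks it only lower-order approximates the true boundary distance, which is what motivates classifiers that output this distance directly.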