Search Results for author: Fabio Brau

Found 4 papers, 1 paper with code

1-Lipschitz Layers Compared: Memory, Speed, and Certifiable Robustness

1 code implementation · 28 Nov 2023 · Bernd Prach, Fabio Brau, Giorgio Buttazzo, Christoph H. Lampert

The robustness of neural networks against input perturbations with bounded magnitude represents a serious concern in the deployment of deep learning models in safety-critical systems.
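As an illustration only, the sketch below shows one classic way to obtain a 1-Lipschitz linear layer, spectral normalization, which rescales a weight matrix so the induced linear map cannot expand L2 distances. It is a minimal NumPy toy, not a reproduction of the specific layer families benchmarked in this paper, and `spectral_normalize` is a hypothetical helper name.

```python
import numpy as np

def spectral_normalize(W, n_iters=100):
    """Rescale W so its spectral norm (largest singular value) is at
    most ~1, making x -> W @ x a 1-Lipschitz map in the L2 norm."""
    u = np.random.randn(W.shape[0])
    for _ in range(n_iters):            # power iteration for sigma_max
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ W @ v                   # estimated top singular value
    return W / max(sigma, 1.0)          # only shrink, never expand

# A 1-Lipschitz layer guarantees ||f(x) - f(y)|| <= ||x - y||, the
# property that certified-robustness pipelines build on.
rng = np.random.default_rng(0)
W = spectral_normalize(rng.normal(size=(64, 128)))
x, y = rng.normal(size=128), rng.normal(size=128)
print(np.linalg.norm(W @ x - W @ y) <= np.linalg.norm(x - y))  # True
```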

Robust-by-Design Classification via Unitary-Gradient Neural Networks

no code implementations · 9 Sep 2022 · Fabio Brau, Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo

This work proposes a novel family of classifiers, namely Signed Distance Classifiers (SDCs), which, from a theoretical perspective, directly output the exact distance of x from the classification boundary rather than a probability score (e.g., SoftMax).

Tasks: Classification
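To illustrate the signed-distance idea on the simplest possible case, the sketch below uses a hypothetical linear toy classifier, for which the signed Euclidean distance from the boundary has a closed form and directly yields a robustness certificate. The paper's Unitary-Gradient Neural Networks concern obtaining this exact-distance property in deep nonlinear models, which is not shown here; `signed_distance` and `certify` are illustrative names.

```python
import numpy as np

def signed_distance(x, w, b):
    """For a linear classifier sign(w @ x + b), the signed Euclidean
    distance of x from the decision boundary is (w @ x + b) / ||w||."""
    return (w @ x + b) / np.linalg.norm(w)

def certify(x, w, b, eps):
    """If |signed distance| > eps, no L2 perturbation of norm <= eps
    can flip the predicted class."""
    return abs(signed_distance(x, w, b)) > eps

w, b = np.array([3.0, 4.0]), -1.0
x = np.array([2.0, 1.0])
print(signed_distance(x, w, b))    # (6 + 4 - 1) / 5 = 1.8
print(certify(x, w, b, eps=1.0))   # True: the boundary is 1.8 away
```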

Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis

no code implementations · 14 Mar 2022 · Giulio Rossolini, Federico Nesti, Fabio Brau, Alessandro Biondi, Giorgio Buttazzo

This work presents Z-Mask, a robust and effective strategy to improve the adversarial robustness of convolutional networks against physically-realizable adversarial attacks.

Tasks: Adversarial Robustness, Object Detection (+2)
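As a rough, hypothetical illustration of the over-activation idea (not the actual Z-Mask algorithm, which aggregates evidence across multiple internal layers), the NumPy sketch below flags spatial locations of a single feature map whose activation energy is a statistical outlier, the kind of anomaly a physical adversarial patch tends to produce. `overactivation_mask` and the z-score threshold are assumptions made for this sketch.

```python
import numpy as np

def overactivation_mask(feat, z_thresh=3.0):
    """Flag spatial locations whose activation energy is an outlier.
    feat: (C, H, W) feature map. Returns a boolean (H, W) mask that
    could be used to suppress suspicious regions (e.g., a patch)."""
    energy = np.abs(feat).sum(axis=0)      # per-location energy
    mu, sigma = energy.mean(), energy.std() + 1e-12
    z = (energy - mu) / sigma              # z-score per location
    return z > z_thresh                    # True = over-activated

feat = np.random.randn(16, 32, 32)
feat[:, 10:14, 10:14] += 8.0   # simulate a patch-induced spike
mask = overactivation_mask(feat)
print(mask[10:14, 10:14].mean())  # close to 1.0 inside the spike
```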

On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error

no code implementations · 4 Jan 2022 · Fabio Brau, Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo

In this regard, the Euclidean distance of the input from the classification boundary is a well-established robustness measure, as it corresponds to the minimal adversarial perturbation needed to change the classification.
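For context, the sketch below shows the classic margin-based lower bound on this minimal L2 perturbation for an L-Lipschitz classifier, margin / (√2·L): since moving the input by δ changes the logit vector by at most L·||δ|| in L2, and flipping the top-1 class requires shifting a logit difference by the margin, the prediction cannot change while ||δ|| is below that bound. This is a standard certificate, not the tighter estimation procedure with provable error proposed in the paper, and `certified_radius` is a hypothetical name.

```python
import numpy as np

def certified_radius(logits, lipschitz_const):
    """Lower bound on the minimal L2 adversarial perturbation of an
    L-Lipschitz classifier: a logit-vector shift of L2 norm r can
    change any difference of two logits by at most sqrt(2) * r, so
    the top-1 class is stable while ||delta|| < margin / (sqrt(2)*L)."""
    top2 = np.sort(logits)[-2:]            # two largest logits
    margin = top2[1] - top2[0]
    return margin / (np.sqrt(2) * lipschitz_const)

logits = np.array([2.5, 0.3, -1.0])
print(certified_radius(logits, lipschitz_const=1.0))  # ~1.556
```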
