no code implementations • 19 Nov 2023 • Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo
Deep neural networks exhibit excellent performance in computer vision tasks, but their vulnerability to real-world adversarial attacks, achieved through physical objects that can corrupt their predictions, raises serious security concerns for their application in safety-critical domains.
no code implementations • 28 Feb 2023 • Gianluca D'Amico, Mauro Marinoni, Federico Nesti, Giulio Rossolini, Giorgio Buttazzo, Salvatore Sabina, Gianluigi Lauro
The railway industry is searching for new ways to automate a number of complex train functions, such as object detection, track discrimination, and accurate train positioning. These functions require the artificial perception of the railway environment through different types of sensors, including cameras, LiDARs, wheel encoders, and inertial measurement units.
no code implementations • 9 Sep 2022 • Fabio Brau, Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo
This work proposes a novel family of classifiers, namely Signed Distance Classifiers (SDCs), which, from a theoretical perspective, directly output the exact distance of an input x from the classification boundary, rather than a probability score (e.g., a softmax output).
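To make this property concrete, here is a minimal, self-contained sketch of the linear case, where the signed distance from the decision boundary has a closed form; the variable names (w, b) and example values are illustrative, not taken from the paper:

```python
import numpy as np

# For a linear classifier f(x) = w.x + b, the signed Euclidean distance
# of x from the decision boundary {x : f(x) = 0} is f(x) / ||w||.
# An SDC aims to produce this quantity directly for deep models.
w = np.array([3.0, 4.0])   # illustrative weights (assumption)
b = -5.0                   # illustrative bias (assumption)

def signed_distance(x: np.ndarray) -> float:
    # Sign encodes the predicted class; magnitude is the exact
    # distance to the boundary.
    return (w @ x + b) / np.linalg.norm(w)

x = np.array([2.0, 1.0])
print(signed_distance(x))  # (3*2 + 4*1 - 5) / 5 = 1.0
```

The appeal of such an output is that the distance itself certifies robustness: no perturbation with norm smaller than |d| can flip the prediction.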
1 code implementation • 9 Jun 2022 • Federico Nesti, Giulio Rossolini, Gianluca D'Amico, Alessandro Biondi, Giorgio Buttazzo
Nevertheless, little work has been devoted to the generation of datasets specifically designed to evaluate the adversarial robustness of neural models.
no code implementations • 14 Mar 2022 • Giulio Rossolini, Federico Nesti, Fabio Brau, Alessandro Biondi, Giorgio Buttazzo
This work presents Z-Mask, a robust and effective strategy to improve the adversarial robustness of convolutional networks against physically-realizable adversarial attacks.
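As a rough illustration of the general idea behind a z-score-based spatial masking defense, the sketch below flags over-activated regions of an inner feature map; the channel aggregation, the threshold tau, and the function name are assumptions made for illustration, not the authors' exact pipeline:

```python
import torch

def z_mask(feature_map: torch.Tensor, tau: float = 2.5) -> torch.Tensor:
    """Hypothetical sketch: flag spatial locations whose aggregated
    activation deviates strongly from the map's own statistics.

    feature_map: (C, H, W) activations from an inner conv layer.
    Returns a binary (H, W) mask where 1 marks suspicious locations.
    """
    # Aggregate channels into a single spatial heatmap.
    heat = feature_map.abs().sum(dim=0)              # (H, W)
    # Standardize each location against the whole map (z-score).
    z = (heat - heat.mean()) / (heat.std() + 1e-8)
    # Locations above the threshold are treated as anomalous,
    # e.g., possibly covered by a physically-realizable patch.
    return (z > tau).float()
```

In practice, such a mask would be upsampled to the input resolution and used to suppress the suspicious pixels before re-running inference.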
2 code implementations • 5 Jan 2022 • Giulio Rossolini, Federico Nesti, Gianluca D'Amico, Saasha Nair, Alessandro Biondi, Giorgio Buttazzo
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks such as visual perception in autonomous driving.
no code implementations • 4 Jan 2022 • Fabio Brau, Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo
In this regard, the Euclidean distance of the input from the classification boundary is a well-established robustness measure, as it equals the norm of the minimal adversarial perturbation that can change the prediction.
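In symbols, this measure can be written as the smallest-norm perturbation that changes the predicted class (a standard formulation, stated here for clarity):

```latex
d(x) \;=\; \min_{\delta} \|\delta\|_2
\quad \text{s.t.} \quad
\arg\max_i f_i(x+\delta) \;\neq\; \arg\max_i f_i(x)
```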
1 code implementation • 13 Aug 2021 • Federico Nesti, Giulio Rossolini, Saasha Nair, Alessandro Biondi, Giorgio Buttazzo
Finally, a printed physical billboard containing an adversarial patch was tested in an outdoor driving scenario to assess the feasibility of the studied attacks in the real world.
no code implementations • 28 Jan 2021 • Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo
The great performance of machine learning algorithms and deep neural networks in several perception and control tasks is pushing the industry to adopt such technologies in safety-critical applications, such as autonomous robots and self-driving vehicles.