Search Results for author: Giulio Rossolini

Found 9 papers, 3 papers with code

Attention-Based Real-Time Defenses for Physical Adversarial Attacks in Vision Applications

no code implementations19 Nov 2023 Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo

Deep neural networks exhibit excellent performance in computer vision tasks, but their vulnerability to real-world adversarial attacks, achieved through physical objects that can corrupt their predictions, raises serious security concerns for their application in safety-critical domains.

TrainSim: A Railway Simulation Framework for LiDAR and Camera Dataset Generation

no code implementations28 Feb 2023 Gianluca D'Amico, Mauro Marinoni, Federico Nesti, Giulio Rossolini, Giorgio Buttazzo, Salvatore Sabina, Gianluigi Lauro

The railway industry is searching for new ways to automate a number of complex train functions, such as object detection, track discrimination, and accurate train positioning, which require the artificial perception of the railway environment through different types of sensors, including cameras, LiDARs, wheel encoders, and inertial measurement units.

Object Detection +1

Robust-by-Design Classification via Unitary-Gradient Neural Networks

no code implementations9 Sep 2022 Fabio Brau, Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo

This work proposes a novel family of classifiers, namely Signed Distance Classifiers (SDCs), that, from a theoretical perspective, directly output the exact distance of an input x from the classification boundary, rather than a probability score (e.g., SoftMax).

Classification
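As a rough illustration of the signed-distance idea (not the paper's Unitary-Gradient architecture), the sketch below assumes a simple linear binary classifier, for which the signed Euclidean distance from the decision boundary has a closed form and can be contrasted with a sigmoid-style probability score.

```python
import numpy as np

# Hypothetical linear binary classifier f(x) = w . x + b (illustrative only).
w = np.array([2.0, -1.0])
b = 0.5
x = np.array([1.0, 3.0])

logit = w @ x + b                              # raw score
probability = 1.0 / (1.0 + np.exp(-logit))     # sigmoid "confidence", not a distance
signed_distance = logit / np.linalg.norm(w)    # exact signed distance to the boundary

print(f"probability     = {probability:.3f}")
print(f"signed distance = {signed_distance:.3f}")
```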

CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models

1 code implementation9 Jun 2022 Federico Nesti, Giulio Rossolini, Gianluca D'Amico, Alessandro Biondi, Giorgio Buttazzo

Nevertheless, little work has been devoted to the generation of datasets specifically designed to evaluate the adversarial robustness of neural models.

Adversarial Defense Adversarial Robustness +1

Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis

no code implementations14 Mar 2022 Giulio Rossolini, Federico Nesti, Fabio Brau, Alessandro Biondi, Giorgio Buttazzo

This work presents Z-Mask, a robust and effective strategy to improve the adversarial robustness of convolutional networks against physically-realizable adversarial attacks.

Adversarial Robustness Object Detection +2

On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving

2 code implementations5 Jan 2022 Giulio Rossolini, Federico Nesti, Gianluca D'Amico, Saasha Nair, Alessandro Biondi, Giorgio Buttazzo

The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks such as visual perception in autonomous driving.

Adversarial Robustness Autonomous Driving +2

On the Minimal Adversarial Perturbation for Deep Neural Networks with Provable Estimation Error

no code implementations4 Jan 2022 Fabio Brau, Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo

In this regard, the Euclidean distance of the input from the classification boundary is a well-established robustness measure, since it corresponds to the minimal adversarial perturbation needed to change the prediction.
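As a toy example of this relationship (for an assumed linear model, not the paper's provable estimation procedure for deep networks), the minimal L2 perturbation that moves an input onto the decision boundary is its orthogonal projection onto that boundary, and its norm equals the input's Euclidean distance from it.

```python
import numpy as np

# Toy linear classifier f(x) = w . x + b, assumed for illustration only.
w = np.array([2.0, -1.0])
b = 0.5
x = np.array([1.0, 3.0])

f_x = w @ x + b
# Minimal L2 perturbation placing x exactly on the boundary, i.e. f(x + delta) = 0.
delta = -f_x * w / (np.linalg.norm(w) ** 2)

print(np.isclose(w @ (x + delta) + b, 0.0))                  # True: x + delta lies on the boundary
print(np.linalg.norm(delta), abs(f_x) / np.linalg.norm(w))   # equal: perturbation norm = distance
```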

Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks

1 code implementation13 Aug 2021 Federico Nesti, Giulio Rossolini, Saasha Nair, Alessandro Biondi, Giorgio Buttazzo

Finally, a printed physical billboard containing an adversarial patch was tested in an outdoor driving scenario to assess the feasibility of the studied attacks in the real world.

Autonomous Driving Object Detection +2

Increasing the Confidence of Deep Neural Networks by Coverage Analysis

no code implementations28 Jan 2021 Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo

The great performance of machine learning algorithms and deep neural networks in several perception and control tasks is pushing the industry to adopt such technologies in safety-critical applications, such as autonomous robots and self-driving vehicles.
