Search Results for author: Ravi Mangal

Found 7 papers, 2 papers with code

Assumption Generation for the Verification of Learning-Enabled Autonomous Systems

no code implementations • 27 May 2023 • Corina Pasareanu, Ravi Mangal, Divya Gopinath, Huafeng Yu

Our insight is that we can analyze the system in the absence of the DNN perception components by automatically synthesizing assumptions on the DNN behaviour that guarantee the satisfaction of the required safety properties.
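
A minimal sketch of the assumption-checking idea in Python; the tolerance-based assumption and all values below are hypothetical, not taken from the paper:

```python
# Minimal sketch of checking a synthesized assumption on DNN behaviour.
# Assumption (hypothetical): system safety was proven under the condition
# that the DNN's perception estimate is within `tolerance` of ground truth.
import numpy as np

def dnn_assumption_holds(dnn_outputs, ground_truth, tolerance=0.5):
    """Check the synthesized assumption: every perception estimate must be
    within `tolerance` of the true value."""
    errors = np.abs(np.asarray(dnn_outputs) - np.asarray(ground_truth))
    return bool(np.all(errors <= tolerance))

# If the assumption holds, the system-level safety argument (established
# without analyzing the DNN itself) carries over to the full system.
estimates = [1.9, 2.1, 2.4]   # hypothetical DNN perception outputs
truth     = [2.0, 2.0, 2.5]   # hypothetical ground-truth values
print(dnn_assumption_holds(estimates, truth))  # True
```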

Closed-loop Analysis of Vision-based Autonomous Systems: A Case Study

no code implementations • 6 Feb 2023 • Corina S. Pasareanu, Ravi Mangal, Divya Gopinath, Sinem Getir Yaman, Calum Imrie, Radu Calinescu, Huafeng Yu

We address the above challenges by replacing the camera and the network with a compact probabilistic abstraction built from the confusion matrices computed for the DNN on a representative image data set.
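
A minimal sketch of such a confusion-matrix abstraction; the matrix and class counts below are invented for illustration:

```python
# Minimal sketch: replace camera + DNN with a probabilistic abstraction
# derived from a confusion matrix (values are illustrative, not the paper's).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical confusion matrix: rows = true class, columns = DNN prediction,
# with counts collected on a representative image data set.
confusion = np.array([[90,  8,  2],
                      [ 5, 85, 10],
                      [ 3,  7, 90]])

# Normalize rows to obtain P(prediction | true class).
p_pred_given_true = confusion / confusion.sum(axis=1, keepdims=True)

def abstract_perception(true_class):
    """Sample a DNN prediction from the abstraction instead of running
    the camera and the network."""
    return rng.choice(len(p_pred_given_true), p=p_pred_given_true[true_class])

# Closed-loop analysis can now draw perception outputs from this compact model.
print([abstract_perception(1) for _ in range(5)])
```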

On the Perils of Cascading Robust Classifiers

1 code implementation • 1 Jun 2022 • Ravi Mangal, Zifan Wang, Chi Zhang, Klas Leino, Corina Pasareanu, Matt Fredrikson

We present the cascade attack (CasA), an adversarial attack against cascading ensembles, and show that: (1) there exists an adversarial input for up to 88% of the samples where the ensemble claims to be certifiably robust and accurate; and (2) the accuracy of a cascading ensemble under our attack is as low as 11% when it claims to be certifiably robust and accurate on 97% of the test set.

Adversarial Attack
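
A toy sketch of why cascading certificates can be unsound; the 1-D models and their certificates are stand-ins, not the paper's ensembles or the CasA algorithm:

```python
# Toy cascading certified ensemble over 1-D inputs (illustrative only).
def make_model(threshold):
    """Toy binary classifier: label 1 iff x > threshold, with a toy
    certificate that claims robustness whenever x is far from the boundary."""
    predict = lambda x: int(x > threshold)
    certify = lambda x, eps: abs(x - threshold) > eps
    return predict, certify

models = [make_model(0.0), make_model(0.3)]

def cascade_predict(x, eps):
    """The first member that certifies decides, and its certificate is
    taken as the ensemble's certificate."""
    for predict, certify in models:
        if certify(x, eps):
            return predict(x), True   # (label, certified)
    predict, _ = models[-1]
    return predict(x), False

# Two points only 0.1 apart are both "certified" yet receive different
# labels, so at least one certificate is vacuous -- exactly the kind of
# inconsistency a cascade attack searches for.
print(cascade_predict(0.15, 0.1))  # (1, True): member 0 certifies and decides
print(cascade_predict(0.05, 0.1))  # (0, True): member 1 certifies and decides
```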

Discrete-Event Controller Synthesis for Autonomous Systems with Deep-Learning Perception Components

no code implementations • 7 Feb 2022 • Radu Calinescu, Calum Imrie, Ravi Mangal, Genaína Nunes Rodrigues, Corina Păsăreanu, Misael Alpizar Santana, Gricel Vázquez

We use the method in simulation to synthesise controllers for mobile-robot collision mitigation and for maintaining driver attentiveness in shared-control autonomous driving.

Autonomous Driving • Decision Making

Degradation Attacks on Certifiably Robust Neural Networks

no code implementations • 29 Sep 2021 • Klas Leino, Chi Zhang, Ravi Mangal, Matt Fredrikson, Bryan Parno, Corina Pasareanu

Certifiably robust neural networks employ provable run-time defenses against adversarial examples by checking if the model is locally robust at the input under evaluation.
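
A minimal sketch of such a run-time check, assuming a margin-plus-Lipschitz style certificate; the scoring function and constants are hypothetical:

```python
# Minimal sketch of a run-time certified defense (illustrative; real
# defenses use learned Lipschitz bounds, not this toy construction).
import numpy as np

def predict_with_certificate(logits_fn, lipschitz, x, eps):
    """Predict, and accept the answer only if a margin-based check proves
    local robustness at x; otherwise abstain."""
    logits = logits_fn(x)
    order = np.argsort(logits)[::-1]
    margin = logits[order[0]] - logits[order[1]]
    # An eps-perturbation can change the margin by at most 2 * L * eps,
    # so a larger margin proves the top class cannot flip in the eps-ball.
    if margin > 2 * lipschitz * eps:
        return int(order[0])
    return None  # abstain: robustness not certified at this input

# A degradation attack need only shrink the margin below the threshold:
# the model then abstains, degrading availability without flipping labels.
toy_logits = lambda x: np.array([np.sum(x), 0.2])  # hypothetical scoring fn
print(predict_with_certificate(toy_logits, lipschitz=1.0,
                               x=np.array([1.5]), eps=0.1))  # 0
```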

Self-Correcting Neural Networks For Safe Classification

1 code implementation • 23 Jul 2021 • Klas Leino, Aymeric Fromherz, Ravi Mangal, Matt Fredrikson, Bryan Parno, Corina Păsăreanu

These constraints relate requirements on the order of the classes output by a classifier to conditions on its input, and are expressive enough to encode various interesting examples of classifier safety specifications from the literature.

Classification
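
A minimal sketch of a self-correction step for one hypothetical ordering constraint; the constraint and repair rule are illustrative, not the paper's construction:

```python
# Minimal sketch of a self-correction layer enforcing an ordering constraint.
import numpy as np

def self_correct(logits, input_condition_holds, forbidden_class):
    """Safety spec (hypothetical): whenever the input condition holds, the
    forbidden class must not be ranked first. The layer repairs any output
    that violates the spec, so the corrected classifier satisfies it by
    construction."""
    logits = np.array(logits, dtype=float)
    if input_condition_holds and np.argmax(logits) == forbidden_class:
        # Repair: demote the forbidden class just below the runner-up.
        runner_up = np.partition(logits, -2)[-2]
        logits[forbidden_class] = runner_up - 1e-6
    return logits

# E.g., "if the intruder aircraft is close (condition), 'climb hard' (class 0)
# may not be the top-ranked advisory":
print(self_correct([3.0, 1.0, 2.0], input_condition_holds=True,
                   forbidden_class=0))  # class 2 now ranks first
```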

Robustness of Neural Networks: A Probabilistic and Practical Approach

no code implementations • 15 Feb 2019 • Ravi Mangal, Aditya V. Nori, Alessandro Orso

Our algorithm uses abstract interpretation to approximate the behavior of a neural network and compute an overapproximation of the input regions that violate robustness.
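
A minimal sketch of the overapproximation idea using interval abstract interpretation (a deliberately coarse domain); the network weights below are invented:

```python
# Minimal sketch: propagate an input box through an affine + ReLU layer
# with interval arithmetic, yielding sound bounds on the network's outputs.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> Wx + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return (W_pos @ lo + W_neg @ hi + b,
            W_pos @ hi + W_neg @ lo + b)

def interval_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Hypothetical 2x2 layer; the box is an eps-ball around an input point.
W, b = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
x, eps = np.array([0.5, 0.5]), 0.1
lo, hi = interval_relu(*interval_affine(x - eps, x + eps, W, b))

# If the output intervals of two classes overlap, the box *may* contain an
# input where the prediction flips -- an overapproximation of the input
# regions that violate robustness.
print(lo, hi)
```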
