Search Results for author: Ravi Mangal

Found 11 papers, 3 papers with code

Mechanistically Interpreting a Transformer-based 2-SAT Solver: An Axiomatic Approach

1 code implementation • 18 Jul 2024 • Nils Palumbo, Ravi Mangal, Zifan Wang, Saranya Vijayakumar, Corina S. Pasareanu, Somesh Jha

Inspired by the notion of abstract interpretation from the program analysis literature that aims to develop approximate semantics for programs, we give a set of axioms that formally characterize a mechanistic interpretation as a description that approximately captures the semantics of the neural network under analysis in a compositional manner.

Concept-based Analysis of Neural Networks via Vision-Language Models

no code implementations • 28 Mar 2024 • Ravi Mangal, Nina Narodytska, Divya Gopinath, Boyue Caroline Hu, Anirban Roy, Susmit Jha, Corina Pasareanu

The analysis of vision-based deep neural networks (DNNs) is highly desirable but very challenging, due to the difficulty of expressing formal specifications for vision tasks and the lack of efficient verification procedures.

Transfer Attacks and Defenses for Large Language Models on Coding Tasks

no code implementations • 22 Nov 2023 • Chi Zhang, Zifan Wang, Ravi Mangal, Matt Fredrikson, Limin Jia, Corina Pasareanu

Large language models (LLMs) improve upon previous neural network models of code, such as code2seq or seq2seq, which already demonstrated competitive results on tasks such as code summarization and identifying code vulnerabilities.

Code Summarization

Is Certifying $\ell_p$ Robustness Still Worthwhile?

no code implementations • 13 Oct 2023 • Ravi Mangal, Klas Leino, Zifan Wang, Kai Hu, Weicheng Yu, Corina Pasareanu, Anupam Datta, Matt Fredrikson

There are three layers to this inquiry, which we address in this paper: (1) why do we care about robustness research?

Assumption Generation for the Verification of Learning-Enabled Autonomous Systems

no code implementations • 27 May 2023 • Corina Pasareanu, Ravi Mangal, Divya Gopinath, Huafeng Yu

Our insight is that we can analyze the system in the absence of the DNN perception components by automatically synthesizing assumptions on the DNN behaviour that guarantee the satisfaction of the required safety properties.

Closed-loop Analysis of Vision-based Autonomous Systems: A Case Study

no code implementations • 6 Feb 2023 • Corina S. Pasareanu, Ravi Mangal, Divya Gopinath, Sinem Getir Yaman, Calum Imrie, Radu Calinescu, Huafeng Yu

We address the above challenges by replacing the camera and the network with a compact probabilistic abstraction built from the confusion matrices computed for the DNN on a representative image data set.
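To make the idea of a probabilistic abstraction concrete, here is a minimal sketch: instead of running a camera and DNN, the closed-loop analysis samples the perception output from an empirical confusion matrix. The matrix values, class labels, and function names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Confusion matrix estimated on a representative image set
# (rows: true state, cols: DNN output). Values are illustrative only.
confusion = np.array([
    [0.95, 0.05],   # true "obstacle": P(detected), P(missed)
    [0.10, 0.90],   # true "clear":    P(false alarm), P(correct)
])

def abstract_perception(true_state, rng):
    # Replace camera + DNN with a sample from the DNN's error profile.
    return rng.choice(2, p=confusion[true_state])

rng = np.random.default_rng(0)
# Simulate 10,000 perception calls with an obstacle actually present.
samples = [abstract_perception(0, rng) for _ in range(10_000)]
```

A closed-loop safety analysis can then run the controller against this compact stochastic model instead of the full vision pipeline.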

On the Perils of Cascading Robust Classifiers

1 code implementation • 1 Jun 2022 • Ravi Mangal, Zifan Wang, Chi Zhang, Klas Leino, Corina Pasareanu, Matt Fredrikson

We present cascade attack (CasA), an adversarial attack against cascading ensembles, and show that: (1) there exists an adversarial input for up to 88% of the samples where the ensemble claims to be certifiably robust and accurate; and (2) the accuracy of a cascading ensemble under our attack is as low as 11% when it claims to be certifiably robust and accurate on 97% of the test set.
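The cascading-ensemble pattern under attack can be illustrated with a toy 1-D sketch (this is not CasA itself; the "models" and regions below are hypothetical): each member answers only where it can certify, and the pitfall is that an adversary who moves the input across region boundaries changes which model answers, so per-model certificates need not compose into an ensemble certificate.

```python
# Toy 1-D "models": (certify, predict) pairs; each certifies its own region.
models = [
    (lambda x: x < 0.0, lambda x: "A"),   # model 1 certifies negative inputs
    (lambda x: x > 1.0, lambda x: "B"),   # model 2 certifies large inputs
]

def cascade_predict(models, x):
    for certify, predict in models:
        if certify(x):
            return predict(x)     # first model with a certificate answers
    return models[-1][1](x)       # no certificate: last model's plain guess

print(cascade_predict(models, -0.5))  # "A" (certified by model 1)
print(cascade_predict(models, 0.5))   # "B" (uncertified fallback)
print(cascade_predict(models, 2.0))   # "B" (certified by model 2)
```

Note that a perturbation from -0.5 to 0.5 silently switches both the answering model and the label, even though each member was "certified" on its own region.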

Adversarial Attack

Discrete-Event Controller Synthesis for Autonomous Systems with Deep-Learning Perception Components

no code implementations • 7 Feb 2022 • Radu Calinescu, Calum Imrie, Ravi Mangal, Genaína Nunes Rodrigues, Corina Păsăreanu, Misael Alpizar Santana, Gricel Vázquez

We use the method in simulation to synthesise controllers for mobile-robot collision mitigation and for maintaining driver attentiveness in shared-control autonomous driving.

Autonomous Driving • Decision Making

Degradation Attacks on Certifiably Robust Neural Networks

no code implementations • 29 Sep 2021 • Klas Leino, Chi Zhang, Ravi Mangal, Matt Fredrikson, Bryan Parno, Corina Pasareanu

Certifiably robust neural networks employ provable run-time defenses against adversarial examples by checking if the model is locally robust at the input under evaluation.
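A common shape for such a run-time defense is a Lipschitz margin check: predict only when the logit gap rules out any label flip within an eps-ball, otherwise abstain. The sketch below is a generic version of this idea, not the specific defenses studied in the paper; the Lipschitz constant and logit values are made up.

```python
import numpy as np

def certified_predict(logits, lipschitz, eps):
    # Run-time certification sketch: each logit can move by at most
    # lipschitz * eps under an eps-bounded perturbation, so the top-class
    # margin must exceed twice that for the argmax to be stable.
    order = np.argsort(logits)[::-1]
    top, runner_up = logits[order[0]], logits[order[1]]
    if top - runner_up > 2 * lipschitz * eps:
        return int(order[0])    # certified prediction
    return None                 # abstain: not provably locally robust here

print(certified_predict(np.array([3.0, 0.5, -1.0]), lipschitz=1.0, eps=0.1))  # 0
print(certified_predict(np.array([1.0, 0.95]), lipschitz=1.0, eps=0.1))       # None
```

A degradation attack in this setting would not need to flip the label at all: it only needs to shrink the margin enough to force the defense into the abstaining branch.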


Self-Correcting Neural Networks For Safe Classification

1 code implementation • 23 Jul 2021 • Klas Leino, Aymeric Fromherz, Ravi Mangal, Matt Fredrikson, Bryan Parno, Corina Păsăreanu

These constraints relate requirements on the order of the classes output by a classifier to conditions on its input, and are expressive enough to encode various interesting examples of classifier safety specifications from the literature.
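A minimal sketch of such an ordering constraint, assuming a hypothetical driving example (classes, thresholds, and the repair rule below are illustrative, not the paper's construction): each constraint says that when a condition on the input holds, one class's logit must outrank another's, and a correction layer minimally repairs violating outputs at run time.

```python
import numpy as np

def self_correct(x, logits, constraints):
    # Each constraint: (applies(x), must_beat, beaten) meaning that when
    # applies(x) is true, class must_beat must outrank class beaten.
    out = logits.copy()
    for applies, must_beat, beaten in constraints:
        if applies(x) and out[must_beat] <= out[beaten]:
            out[must_beat] = out[beaten] + 1e-6   # minimal repair
    return out

STOP, GO = 0, 1
# Safety spec: if the obstacle distance is under 1.0, STOP must outrank GO.
constraints = [(lambda x: x < 1.0, STOP, GO)]

corrected = self_correct(0.5, np.array([0.2, 0.8]), constraints)
print(np.argmax(corrected))   # 0: the unsafe GO prediction was repaired
```

Inputs that satisfy the specification pass through unchanged, so the correction layer only intervenes on violating outputs.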

Classification

Robustness of Neural Networks: A Probabilistic and Practical Approach

no code implementations • 15 Feb 2019 • Ravi Mangal, Aditya V. Nori, Alessandro Orso

Our algorithm uses abstract interpretation to approximate the behavior of a neural network and compute an overapproximation of the input regions that violate robustness.
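The abstract-interpretation step can be illustrated with standard interval (box) propagation through a small network. This is a generic sketch of sound overapproximation, not the paper's algorithm; the network weights and the eps-ball are arbitrary.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate an input box [lo, hi] through an affine layer.
    # Splitting weights by sign gives sound elementwise output bounds.
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps bounds to bounds exactly.
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy 2-layer network and an eps-ball around an input x.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x, eps = np.array([0.5, -0.2, 0.1]), 0.05
lo, hi = x - eps, x + eps
lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = interval_relu(lo, hi)
lo, hi = interval_affine(lo, hi, W2, b2)
# [lo, hi] soundly overapproximates the network's outputs on the box;
# if the bounds cannot separate the top class, the region may violate
# robustness and is flagged (possibly spuriously, since this is an
# overapproximation).
print(lo, hi)
```

Because the abstraction is an overapproximation, every true robustness violation is flagged, at the cost of possible false alarms.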
