Search Results for author: Corina Pasareanu

Found 11 papers, 3 papers with code

A Programmatic and Semantic Approach to Explaining and Debugging Neural Network Based Object Detectors

no code implementations • 1 Dec 2019 • Edward Kim, Divya Gopinath, Corina Pasareanu, Sanjit Seshia

The approach is programmatic in that the scenario representation is a program in a domain-specific probabilistic programming language, which can be used to generate synthetic data to test a given perception module.

Probabilistic Programming
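
A minimal sketch of the programmatic idea, under invented details: the paper's domain-specific language is not reproduced here, and sample_scene, its parameters, and its distributions are all hypothetical stand-ins for a real scenario program.

    import random

    # Hypothetical stand-in for a probabilistic scenario program: each call
    # samples one concrete scene that could be rendered into a synthetic
    # test input for the perception module under test.
    def sample_scene():
        return {
            "car_distance_m": random.uniform(5.0, 50.0),   # ego-to-car distance
            "car_heading_deg": random.gauss(0.0, 15.0),    # relative heading
            "time_of_day_h": random.choice([9, 12, 17]),   # lighting condition
        }

    test_scenes = [sample_scene() for _ in range(100)]   # synthetic test suite
    print(test_scenes[0])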

Parallelization Techniques for Verifying Neural Networks

no code implementations • 17 Apr 2020 • Haoze Wu, Alex Ozdemir, Aleksandar Zeljić, Ahmed Irfan, Kyle Julian, Divya Gopinath, Sadjad Fouladi, Guy Katz, Corina Pasareanu, Clark Barrett

Inspired by recent successes with parallel optimization techniques for solving Boolean satisfiability, we investigate a set of strategies and heuristics that aim to leverage parallel computing to improve the scalability of neural network verification.
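
One strategy in this spirit, sketched under invented details: partition the input region and dispatch each sub-region to a separate verifier process. verify_region is a toy stand-in for a real solver query.

    from concurrent.futures import ProcessPoolExecutor

    # Toy "network" f(x) = x^2 and a toy property: f stays within [0, 1] on
    # the region. For this toy, checking interval endpoints is exact; a real
    # verifier call would go here instead.
    def verify_region(bounds):
        lo, hi = bounds
        return max(lo * lo, hi * hi) <= 1.0

    def split(lo, hi, parts):
        step = (hi - lo) / parts
        return [(lo + i * step, lo + (i + 1) * step) for i in range(parts)]

    if __name__ == "__main__":
        regions = split(-1.0, 1.0, 8)
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(verify_region, regions))
        # The property holds on the whole region iff it holds on every part.
        print("verified" if all(results) else "refuted on some sub-region")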

NNrepair: Constraint-based Repair of Neural Network Classifiers

1 code implementation • 23 Mar 2021 • Muhammad Usman, Divya Gopinath, Youcheng Sun, Yannic Noller, Corina Pasareanu

We present novel strategies to enable precise yet efficient repair, such as inferring correctness specifications to act as oracles for intermediate-layer repair, and generating experts for each class.

Fault localization
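
A hedged sketch of the constraint-based flavour of repair, using z3 (pip install z3-solver). The weights, samples, and bound below are invented, and the encoding is far simpler than NNrepair's.

    from z3 import Real, Solver, sat

    w = 0.4                                 # existing weight into the correct logit
    samples = [(1.0, 0.55), (0.8, 0.50)]    # (activation, competing logit) pairs

    d = Real("d")                           # additive repair to the weight
    s = Solver()
    for act, rival in samples:
        s.add((w + d) * act > rival)        # repaired logit must win on each sample
    s.add(d <= 0.3, -d <= 0.3)              # keep the edit small: |d| <= 0.3

    if s.check() == sat:
        print("repair found: d =", s.model()[d])
    else:
        print("no small repair satisfies the constraints")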

Degradation Attacks on Certifiably Robust Neural Networks

no code implementations • 29 Sep 2021 • Klas Leino, Chi Zhang, Ravi Mangal, Matt Fredrikson, Bryan Parno, Corina Pasareanu

Certifiably robust neural networks employ provable run-time defenses against adversarial examples by checking if the model is locally robust at the input under evaluation.

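The run-time defense pattern being attacked can be sketched as follows. The toy linear classifier and L2 margin certificate below are a standard construction for illustration, not the specific defenses studied in the paper.

    import numpy as np

    W = np.array([[1.0, -0.5],
                  [0.2,  0.9]])             # toy 2-class linear classifier

    def certified_predict(x, eps=0.1):
        logits = W @ x
        i = int(np.argmax(logits))
        for j in range(len(logits)):
            if j == i:
                continue
            # For a linear model, an L2 perturbation of size eps can shrink
            # the (i, j) logit gap by at most ||W[i] - W[j]|| * eps.
            if logits[i] - logits[j] <= np.linalg.norm(W[i] - W[j]) * eps:
                return "reject"             # degradation attacks force this branch
        return i

    print(certified_predict(np.array([1.0, 0.0])))   # certifiably robust here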

On the Perils of Cascading Robust Classifiers

1 code implementation • 1 Jun 2022 • Ravi Mangal, Zifan Wang, Chi Zhang, Klas Leino, Corina Pasareanu, Matt Fredrikson

We present the cascade attack (CasA), an adversarial attack against cascading ensembles, and show that: (1) there exists an adversarial input for up to 88% of the samples where the ensemble claims to be certifiably robust and accurate; and (2) the accuracy of a cascading ensemble under our attack is as low as 11% when it claims to be certifiably robust and accurate on 97% of the test set.

Adversarial Attack
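
The ensemble structure under attack can be sketched as follows, with toy member models: each member either returns a certified prediction or defers to the next, and CasA looks for inputs where the first member to certify is wrong.

    # Each member maps x to (label, certified); the certify rules here are
    # invented stand-ins for real certified classifiers.
    def cascade_predict(models, x):
        for predict_and_certify in models:
            label, certified = predict_and_certify(x)
            if certified:
                return label, True          # cascade commits to this member
        return models[-1](x)[0], False      # fall back, uncertified

    m1 = lambda x: (0 if x < 0 else 1, abs(x) > 0.5)   # certifies far from 0
    m2 = lambda x: (0 if x < 0.2 else 1, True)          # always certifies

    print(cascade_predict([m1, m2], 0.1))   # m1 abstains, m2 answers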

Toward Certified Robustness Against Real-World Distribution Shifts

1 code implementation • 8 Jun 2022 • Haoze Wu, Teruhiro Tagomori, Alexander Robey, Fengjun Yang, Nikolai Matni, George Pappas, Hamed Hassani, Corina Pasareanu, Clark Barrett

We consider the problem of certifying the robustness of deep neural networks against real-world distribution shifts.

Assumption Generation for the Verification of Learning-Enabled Autonomous Systems

no code implementations • 27 May 2023 • Corina Pasareanu, Ravi Mangal, Divya Gopinath, Huafeng Yu

Our insight is that we can analyze the system in the absence of the DNN perception components by automatically synthesizing assumptions on the DNN behaviour that guarantee the satisfaction of the required safety properties.
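
A toy rendering of the assume-guarantee idea, with invented dynamics, controller, and safety property: the DNN is absent, its output ranges over an assumed error bound, and we search for the weakest bound that still keeps the closed loop safe.

    # Assumption on the absent DNN: perceived distance is within error_bound
    # of the true distance. All numbers below are illustrative.
    def controller(perceived_distance):
        return "brake" if perceived_distance < 10.0 else "cruise"

    def step(true_distance, action):
        return true_distance + (2.0 if action == "brake" else -3.0)

    def safe_under_assumption(error_bound, horizon=10, start=20.0):
        frontier = {start}
        for _ in range(horizon):
            nxt = set()
            for dist in frontier:
                for err in (-error_bound, error_bound):   # extreme readings
                    nxt.add(step(dist, controller(dist + err)))
            if any(d <= 0.0 for d in nxt):   # safety: distance never reaches 0
                return False
            frontier = nxt
        return True

    for b in (0.5, 1.0, 2.0, 4.0, 8.0):
        print(b, safe_under_assumption(b))   # large bounds break safety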

Is Certifying $\ell_p$ Robustness Still Worthwhile?

no code implementations • 13 Oct 2023 • Ravi Mangal, Klas Leino, Zifan Wang, Kai Hu, Weicheng Yu, Corina Pasareanu, Anupam Datta, Matt Fredrikson

There are three layers to this inquiry, which we address in this paper: (1) why do we care about robustness research?

Transfer Attacks and Defenses for Large Language Models on Coding Tasks

no code implementations • 22 Nov 2023 • Chi Zhang, Zifan Wang, Ravi Mangal, Matt Fredrikson, Limin Jia, Corina Pasareanu

Large language models improve upon previous neural network models of code, such as code2seq or seq2seq, which already demonstrated competitive results on tasks such as code summarization and identifying code vulnerabilities.

Code Summarization
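
A hedged sketch of the kind of semantics-preserving perturbation used in attacks on code models: rename identifiers so the program's behaviour is unchanged while the model's prediction may flip. The attack studied in the paper is more involved; this shows only the transform.

    import ast   # ast.unparse requires Python 3.9+

    class Rename(ast.NodeTransformer):
        def __init__(self, mapping):
            self.mapping = mapping
        def visit_Name(self, node):
            node.id = self.mapping.get(node.id, node.id)
            return node

    mapping = {"a": "vuln_buf", "b": "idx"}   # adversarially chosen names
    tree = ast.parse("def add(a, b):\n    return a + b")
    for node in ast.walk(tree):               # arguments are arg nodes, not Names
        if isinstance(node, ast.arg):
            node.arg = mapping.get(node.arg, node.arg)
    Rename(mapping).visit(tree)
    print(ast.unparse(tree))                  # same semantics, new surface form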

Inferring Properties of Graph Neural Networks

no code implementations • 8 Jan 2024 • Dat Nguyen, Hieu M. Vu, Cong-Thanh Le, Bach Le, David Lo, ThanhVu Nguyen, Corina Pasareanu

To tackle the challenge of varying input structures in GNNs, GNNInfer first identifies a set of representative influential structures that contribute significantly towards the prediction of a GNN.

Backdoor Attack
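
The "influential structure" idea can be approximated in an ablation style, sketched here with a toy scoring function in place of a trained GNN; GNNInfer's actual analysis is structural and property-based, not this simple.

    # Score each edge by how much removing it changes the model output, and
    # keep the highest-impact edges as the influential substructure.
    def toy_gnn_score(edges):
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        return sum(d * d for d in deg.values())   # stand-in for a trained GNN

    def influential_edges(edges, k=2):
        base = toy_gnn_score(edges)
        impact = {e: abs(base - toy_gnn_score([f for f in edges if f != e]))
                  for e in edges}
        return sorted(impact, key=impact.get, reverse=True)[:k]

    g = [(0, 1), (1, 2), (1, 3), (3, 4)]
    print(influential_edges(g))   # edges whose removal shifts the output most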
