Search Results for author: Corina S. Pasareanu

Found 7 papers, 4 papers with code

An Overview of Structural Coverage Metrics for Testing Neural Networks

1 code implementation • 5 Aug 2022 • Muhammad Usman, Youcheng Sun, Divya Gopinath, Rishi Dange, Luca Manolache, Corina S. Pasareanu

Deep neural network (DNN) models, including those used in safety-critical domains, need to be thoroughly tested to ensure that they can reliably perform well in different scenarios.

DNN Testing

AntidoteRT: Run-time Detection and Correction of Poison Attacks on Neural Networks

1 code implementation • 31 Jan 2022 • Muhammad Usman, Youcheng Sun, Divya Gopinath, Corina S. Pasareanu

For correction, we propose an input correction technique that uses a differential analysis to identify the trigger in the detected poisoned images, which is then reset to a neutral color.

Image Classification
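The correction step described above can be illustrated with a minimal sketch. This is not AntidoteRT's actual differential analysis; it assumes a clean reference image is available and treats any pixel that differs sharply from the reference as part of the trigger, resetting it to a hypothetical neutral gray value:

```python
import numpy as np

NEUTRAL = 128  # neutral gray used to overwrite the trigger (illustrative choice)

def reset_trigger(poisoned, reference, threshold=60):
    """Overwrite suspected trigger pixels with a neutral color.

    `poisoned` and `reference` are HxWxC uint8 arrays. Pixels whose
    maximum per-channel difference from the reference exceeds
    `threshold` are treated as the trigger and reset. This is a
    simplified stand-in for the paper's differential analysis.
    """
    diff = np.abs(poisoned.astype(int) - reference.astype(int)).max(axis=-1)
    mask = diff > threshold          # boolean mask of suspected trigger pixels
    corrected = poisoned.copy()
    corrected[mask] = NEUTRAL        # reset the trigger region to neutral gray
    return corrected
```

After correction, the image can be re-classified; if the trigger drove the misclassification, the corrected input should recover the intended label.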

DeepCert: Verification of Contextually Relevant Robustness for Neural Network Image Classifiers

no code implementations • 2 Mar 2021 • Colin Paterson, Haoze Wu, John Grese, Radu Calinescu, Corina S. Pasareanu, Clark Barrett

We introduce DeepCert, a tool-supported method for verifying the robustness of deep neural network (DNN) image classifiers to contextually relevant perturbations such as blur, haze, and changes in image contrast.
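A contextually relevant perturbation such as a contrast change can be modeled very simply. The sketch below is an assumption about how such a perturbation might be parameterized, not DeepCert's actual encoding: it scales pixel intensities about mid-gray by a factor `c`, so `c = 1` leaves the image unchanged and values near 0 wash it out:

```python
import numpy as np

def adjust_contrast(image, c):
    """Simple contrast perturbation: scale intensities about mid-gray
    by factor c (c=1 is the identity). Illustrative model of a
    'contextually relevant' perturbation; the tool's encodings may differ."""
    out = 128.0 + c * (image.astype(float) - 128.0)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Verifying robustness then amounts to checking that the classifier's output is unchanged for all perturbed images within a bounded range of `c`.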

Property Inference for Deep Neural Networks

1 code implementation • 29 Apr 2019 • Divya Gopinath, Hayes Converse, Corina S. Pasareanu, Ankur Taly

We present techniques for automatically inferring formal properties of feed-forward neural networks.

DifFuzz: Differential Fuzzing for Side-Channel Analysis

2 code implementations • 16 Nov 2018 • Shirin Nilizadeh, Yannic Noller, Corina S. Pasareanu

For this paper, we present an implementation that targets analysis of Java programs, and uses and extends the Kelinci and AFL fuzzers.

Cryptography and Security • Software Engineering
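The kind of side channel DifFuzz hunts for can be shown with a small sketch (this toy example and its names are mine, not from the paper): an early-exit string comparison whose running cost depends on the secret, so two different secrets paired with the same public input produce measurably different costs, exactly the cost difference a differential fuzzer tries to maximize.

```python
def unsafe_equals(secret, guess):
    """Early-exit comparison: the step count leaks how long a prefix
    of `guess` matches `secret`, a classic timing side channel."""
    steps = 0
    for a, b in zip(secret, guess):
        steps += 1
        if a != b:
            return False, steps
    return len(secret) == len(guess), steps

# Same public input, two different secrets: the differing step counts
# expose the secret-dependent cost a differential fuzzer looks for.
_, cost1 = unsafe_equals("s3cret", "sardine")  # matches 1 char, fails on 2nd
_, cost2 = unsafe_equals("t0ken9", "sardine")  # fails on 1st char
```

A constant-time comparison would make `cost1` and `cost2` equal regardless of the secret, closing the channel.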

Compositional Verification for Autonomous Systems with Deep Learning Components

no code implementations • 18 Oct 2018 • Corina S. Pasareanu, Divya Gopinath, Huafeng Yu

As autonomy becomes prevalent in many applications, ranging from recommendation systems to fully autonomous vehicles, there is an increased need to provide safety guarantees for such systems.

Autonomous Vehicles • Recommendation Systems

DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks

no code implementations • 2 Oct 2017 • Divya Gopinath, Guy Katz, Corina S. Pasareanu, Clark Barrett

We propose a novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations.

Adversarial Robustness • Machine Translation • +2
