Deep neural network (DNN) models, including those used in safety-critical domains, need to be thoroughly tested to ensure that they can reliably perform well in different scenarios.
For correction, we propose an input-correction technique that uses differential analysis to identify the trigger in detected poisoned images and reset it to a neutral color.
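The correction step described above can be illustrated with a minimal sketch. The `neutralize_trigger` helper and the way the mask is produced here are assumptions for illustration; the text does not specify how the differential analysis computes the trigger mask, so a simple threshold stands in for it.

```python
import numpy as np

def neutralize_trigger(image: np.ndarray, trigger_mask: np.ndarray,
                       neutral_value: float = 0.5) -> np.ndarray:
    """Reset pixels flagged by the trigger mask to a neutral color.

    `trigger_mask` is a boolean (H, W) array assumed to come from a
    differential-analysis step (hypothetical stand-in here).
    """
    corrected = image.copy()
    corrected[trigger_mask] = neutral_value
    return corrected

# Toy example: a 4x4 grayscale image with a 2x2 bright patch as the trigger.
img = np.zeros((4, 4))
img[0:2, 0:2] = 1.0            # bright patch acting as the trigger
mask = img > 0.9               # stand-in for the differential-analysis output
fixed = neutralize_trigger(img, mask)
print(fixed[0, 0])  # → 0.5
```

Resetting to a mid-gray neutral value rather than deleting pixels keeps the corrected image a valid classifier input of the original shape.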
We introduce DeepCert, a tool-supported method for verifying the robustness of deep neural network (DNN) image classifiers to contextually relevant perturbations such as blur, haze, and changes in image contrast.
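The contextual perturbations named above (haze, contrast change, blur) can be sketched as simple image transforms. These NumPy implementations are illustrative assumptions, not DeepCert's actual perturbation models, which the text does not detail.

```python
import numpy as np

def add_haze(image: np.ndarray, strength: float) -> np.ndarray:
    # Blend the image toward white; strength in [0, 1].
    return (1.0 - strength) * image + strength * 1.0

def change_contrast(image: np.ndarray, factor: float) -> np.ndarray:
    # Scale pixel values around the image mean; factor > 1 raises contrast.
    mean = image.mean()
    return np.clip(mean + factor * (image - mean), 0.0, 1.0)

def box_blur(image: np.ndarray, k: int = 3) -> np.ndarray:
    # Simple box blur via a sliding-window mean (edges handled by padding).
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

img = np.full((8, 8), 0.2)     # uniform dark gray test image
hazy = add_haze(img, 0.5)
print(round(hazy[0, 0], 2))  # → 0.6
```

Sweeping the perturbation parameter (e.g. haze `strength` from 0 to 1) and checking where the classifier's label flips gives a per-image robustness profile for each contextual perturbation.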
In this paper, we present an implementation that targets the analysis of Java programs and uses and extends the Kelinci and AFL fuzzers.
Subjects: Cryptography and Security; Software Engineering
As autonomy becomes prevalent in many applications, ranging from recommendation systems to fully autonomous vehicles, there is an increased need to provide safety guarantees for such systems.
We propose a novel approach for automatically identifying safe regions of the input space, within which the network is robust against adversarial perturbations.
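To make the notion of a region being "robust against adversarial perturbations" concrete, here is a minimal empirical sketch: sampling perturbations in an L-infinity ball and checking that the predicted label never changes. This is an assumption-laden illustration; a sampling check only suggests robustness and is not the verification procedure the approach itself would use to certify safe regions.

```python
import numpy as np

def sample_robust(classify, x, epsilon, n_samples=200, seed=0):
    """Empirically check label stability within an L-infinity ball of
    radius epsilon around x. Sampling can only falsify robustness, not
    prove it (hypothetical stand-in for a real verification procedure)."""
    rng = np.random.default_rng(seed)
    base = classify(x)
    for _ in range(n_samples):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        if classify(np.clip(x + delta, 0.0, 1.0)) != base:
            return False
    return True

# Toy classifier: label an image by whether its mean pixel exceeds 0.5.
classify = lambda x: int(x.mean() > 0.5)
x = np.full((4, 4), 0.9)       # input far from the decision boundary
print(sample_robust(classify, x, epsilon=0.05))  # → True
```

Inputs far from the decision boundary, as in the toy example, pass this check for small epsilon; identifying the largest such epsilon per region is what distinguishes safe regions of the input space.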