Mitigating Deep Learning Vulnerabilities from Adversarial Examples Attack in the Cybersecurity Domain

9 May 2019 · Chris Einar San Agustin

Deep learning models solve classification and regression problems by training over many epochs on large datasets, often reaching high accuracy. However, that does not make them attack-proof or free of vulnerabilities. Newly deployed systems, particularly those exposed to public environments (e.g., public networks), are vulnerable to attacks from a variety of adversaries. Moreover, published research on deep learning systems (Goodfellow et al., 2014) has identified a significant number of attack points and a broad attack surface with demonstrated exploitation through adversarial examples. Successful exploits against these systems could have critical real-world repercussions. For instance: (1) an adversarial attack on a self-driving car running a deep reinforcement learning system could cause it to misclassify humans, leading to accidents; (2) a self-driving vehicle misreading a red light could crash into another car; (3) misclassifying a pedestrian lane as an intersection could likewise lead to collisions. This is only the tip of the iceberg: computer vision is deployed not only in self-driving cars but in many other areas with a definitive real-world impact. These vulnerabilities must be mitigated at an early stage of development, and it is imperative to develop and implement baseline security standards at a global level prior to real-world deployment.
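The attack family cited above (Goodfellow et al., 2014) is commonly illustrated with the Fast Gradient Sign Method (FGSM). The abstract does not prescribe an implementation or framework, so the following is a minimal sketch in PyTorch under stated assumptions: a differentiable classifier, inputs normalized to [0, 1], and an illustrative toy model, input shape, and epsilon value that are not taken from the paper.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method (FGSM).

    Perturbs the input in the direction of the sign of the loss gradient,
    bounded elementwise by epsilon, so that a correctly classified input
    may be pushed across the decision boundary and misclassified.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One epsilon-sized step along the gradient sign, clamped back to the
    # assumed valid input range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Hypothetical stand-ins for a real model and dataset, for illustration only.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    model.eval()
    x = torch.rand(1, 1, 28, 28)   # placeholder input image
    y = torch.tensor([3])          # placeholder ground-truth label
    x_adv = fgsm_attack(model, x, y, epsilon=0.1)
    print("clean prediction:     ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

A standard baseline mitigation, also discussed in Goodfellow et al. (2014), is adversarial training: augmenting each training batch with examples produced by an attack such as the one sketched above so the model learns to classify them correctly.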
