1 code implementation • 19 Dec 2019 • Keane Lucas, Mahmood Sharif, Lujo Bauer, Michael K. Reiter, Saurabh Shintre
Moreover, we found that our attack can fool some commercial anti-virus products, in certain cases with a success rate of 85%.
no code implementations • 19 Nov 2019 • Javier Echauz, Keith Kenemer, Sarfaraz Hussein, Jay Dhaliwal, Saurabh Shintre, Slawomir Grzonkowski, Andrew Gardner
Machine learning models are vulnerable to adversarial inputs that induce seemingly unjustifiable errors.
no code implementations • 27 Jun 2018 • Jasjeet Dhaliwal, Saurabh Shintre
Deep neural networks are susceptible to small-but-specific adversarial perturbations capable of deceiving the network.
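A minimal sketch of how such a "small-but-specific" perturbation can be crafted with the standard fast gradient sign method (FGSM; Goodfellow et al., 2015) — illustrating the general phenomenon, not this paper's detection approach. The toy linear model, input, label, and epsilon below are illustrative assumptions:

```python
# FGSM sketch: perturb the input in the direction that increases the loss.
# The stand-in model and epsilon are assumptions for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in classifier: 28x28 grayscale input, 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

x = torch.rand(1, 1, 28, 28)   # stand-in input image
y = torch.tensor([3])          # its (assumed) true label
x.requires_grad_(True)

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

eps = 0.05  # per-pixel perturbation budget (assumed)
# Step each pixel by +/-eps along the loss gradient's sign.
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
print("max per-pixel change:  ", (x_adv - x).abs().max().item())
```

Against a trained network, a step this small is often enough to flip the prediction; the untrained toy model here only demonstrates the mechanics of the attack.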
3 code implementations • 1 Mar 2017 • Reuben Feinman, Ryan R. Curtin, Saurabh Shintre, Andrew B. Gardner
Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input.
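A toy check of that robustness claim: random sign noise with the same per-pixel budget as the FGSM step above rarely changes a prediction, in contrast to the gradient-directed perturbation. The stand-in model, input, and noise scale are assumptions for illustration, not the paper's experimental setup:

```python
# Estimate how often random +/-eps noise flips a prediction.
# Stand-in model and noise budget are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in DNN
model.eval()

x = torch.rand(1, 1, 28, 28)
y_clean = model(x).argmax(dim=1).item()

eps = 0.05   # same per-pixel budget as the FGSM sketch above
flips = 0
with torch.no_grad():
    for _ in range(1000):
        # Random +/-eps per pixel (uniform random sign).
        noise = eps * (torch.rand_like(x) - 0.5).sign()
        y_noisy = model((x + noise).clamp(0, 1)).argmax(dim=1).item()
        flips += int(y_noisy != y_clean)

print(f"label flipped on {flips}/1000 random +/-{eps} perturbations")
```

The gap between this flip rate and the effect of a directed perturbation of the same magnitude is exactly the asymmetry the excerpt describes.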