Search Results for author: Saurabh Shintre

Found 4 papers, 2 papers with code

Gradient Similarity: An Explainable Approach to Detect Adversarial Attacks against Deep Learning

no code implementations • 27 Jun 2018 • Jasjeet Dhaliwal, Saurabh Shintre

Deep neural networks are susceptible to small-but-specific adversarial perturbations capable of deceiving the network.
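For context, a minimal sketch of how such a perturbation can be crafted with the fast gradient sign method (FGSM); this illustrates the attack setting only, not the paper's gradient-similarity detector, and the model, inputs, and epsilon value are placeholder assumptions:

import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    # Take one step in the sign of the input gradient of the loss;
    # eps bounds the L-infinity norm of the perturbation.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Clamp back to a valid pixel range before returning.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()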

Detecting Adversarial Samples from Artifacts

3 code implementations • 1 Mar 2017 • Reuben Feinman, Ryan R. Curtin, Saurabh Shintre, Andrew B. Gardner

Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input.

Task: Density Estimation
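In the spirit of that task tag, a minimal sketch of kernel density estimation over a network's hidden-layer features, where low density under the predicted class flags a possible adversarial sample; the Gaussian kernel, bandwidth, and feature shapes are illustrative assumptions rather than the authors' exact configuration:

import numpy as np
from sklearn.neighbors import KernelDensity

def fit_class_kdes(features, labels, bandwidth=1.0):
    # Fit one Gaussian KDE per class on hidden-layer features
    # extracted from clean training inputs.
    return {
        c: KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(features[labels == c])
        for c in np.unique(labels)
    }

def density_scores(kdes, feats, preds):
    # Log-density of each sample under its predicted class's KDE;
    # unusually low scores suggest an adversarial input.
    return np.array([kdes[c].score_samples(f[None, :])[0] for f, c in zip(feats, preds)])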
