1 code implementation • 24 Oct 2018 • Soorya Gopalakrishnan, Zhinus Marzi, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani
We also devise attacks based on the locally linear model that outperform the well-known FGSM attack.
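For context, the fast gradient sign method (FGSM) baseline that these locally-linear attacks are compared against can be sketched in a few lines of PyTorch. This is the standard textbook FGSM, not the paper's proposed attack; `model`, `x`, `y`, and `epsilon` are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Standard FGSM: move each input coordinate by epsilon in the
    direction of the sign of the loss gradient (an ell_infty-bounded step)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # sign(grad) gives the steepest ascent direction under an ell_infty budget
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

FGSM takes a single gradient step, which is exact only if the loss is linear in the input; attacks built on a better local model of the network can therefore outperform it.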
3 code implementations • 11 Mar 2018 • Soorya Gopalakrishnan, Zhinus Marzi, Upamanyu Madhow, Ramtin Pedarsani
It is by now well-known that small adversarial perturbations can induce classification errors in deep neural networks (DNNs).
no code implementations • 9 Mar 2018 • Zhinus Marzi, Joao Hespanha, Upamanyu Madhow
There is growing evidence of the importance of spike timing in neural information processing, with even a small number of spikes carrying information; however, computational models of timing-based coding lag significantly behind those for rate coding.
3 code implementations • 15 Jan 2018 • Zhinus Marzi, Soorya Gopalakrishnan, Upamanyu Madhow, Ramtin Pedarsani
In this paper, we study this phenomenon in the setting of a linear classifier, and show that it is possible to exploit sparsity in natural data to combat $\ell_{\infty}$-bounded adversarial perturbations.
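To illustrate the idea of exploiting sparsity as a defense, here is a minimal NumPy sketch of a top-k sparsifying front end applied before a linear classifier. The function names and the choice of basis are illustrative assumptions, not the paper's exact construction: the point is that projecting onto a few large coefficients attenuates a small perturbation spread across every coordinate.

```python
import numpy as np

def sparsify(x, basis, k):
    """Keep only the k largest-magnitude coefficients of x in an
    orthonormal basis (columns of `basis`), zeroing the rest."""
    coeffs = basis.T @ x                    # analysis: expand x in the basis
    small = np.argsort(np.abs(coeffs))[:-k]
    coeffs[small] = 0.0                     # discard all but the top-k coefficients
    return basis @ coeffs                   # synthesis: reconstruct sparsified input

# Hypothetical usage with an identity basis (sparsify in the signal domain):
rng = np.random.default_rng(0)
basis = np.eye(16)
w = rng.standard_normal(16)                 # linear classifier weights
x = rng.standard_normal(16)
delta = 0.1 * np.sign(w)                    # worst-case ell_infty perturbation for w^T x
score_attacked = w @ sparsify(x + delta, basis, k=4)
```

Because the front end zeroes most coordinates, only the perturbation components that survive the top-k projection can move the classifier score, which is the mechanism the paper analyzes for sparse natural data.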