Search Results for author: Zhinus Marzi

Found 4 papers, 3 papers with code

Robust Adversarial Learning via Sparsifying Front Ends

1 code implementation · 24 Oct 2018 · Soorya Gopalakrishnan, Zhinus Marzi, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani

We also devise attacks based on the locally linear model that outperform the well-known FGSM attack.
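The FGSM baseline mentioned here perturbs the input by a small step in the direction of the sign of the loss gradient. As a minimal illustration (not the paper's own attack), the sketch below applies FGSM to a linear classifier with logistic loss, where the gradient sign has a closed form; the weights and point are toy values chosen for this example.

```python
import numpy as np

def fgsm_linear(x, y, w, eps):
    """FGSM on a linear classifier f(x) = w.x with logistic loss.

    For label y in {-1, +1}, the input gradient of
    log(1 + exp(-y * w.x)) has sign -y * sign(w), so the
    eps-bounded l_inf attack is x + eps * sign(-y * w).
    """
    return x + eps * np.sign(-y * w)

# Toy example: a correctly classified point, then its FGSM perturbation.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 0.1])
y = 1                               # clean margin: w @ x = 0.75 > 0
x_adv = fgsm_linear(x, y, w, eps=0.4)
# The attack shifts the margin by eps * ||w||_1 = 1.4, flipping the sign.
print(np.sign(w @ x), np.sign(w @ x_adv))  # → 1.0 -1.0
```

The margin shift of exactly eps times the l1 norm of the weights is what makes dense linear classifiers fragile under l_inf-bounded perturbations, the setting studied in the papers below.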

Combating Adversarial Attacks Using Sparse Representations

3 code implementations · 11 Mar 2018 · Soorya Gopalakrishnan, Zhinus Marzi, Upamanyu Madhow, Ramtin Pedarsani

It is by now well-known that small adversarial perturbations can induce classification errors in deep neural networks (DNNs).


On the information in spike timing: neural codes derived from polychronous groups

no code implementations · 9 Mar 2018 · Zhinus Marzi, Joao Hespanha, Upamanyu Madhow

There is growing evidence regarding the importance of spike timing in neural information processing, with even a small number of spikes carrying information, but computational models lag significantly behind those for rate coding.

Sparsity-based Defense against Adversarial Attacks on Linear Classifiers

3 code implementations · 15 Jan 2018 · Zhinus Marzi, Soorya Gopalakrishnan, Upamanyu Madhow, Ramtin Pedarsani

In this paper, we study this phenomenon in the setting of a linear classifier, and show that it is possible to exploit sparsity in natural data to combat $\ell_{\infty}$-bounded adversarial perturbations.
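The sparsity-based defense projects the input onto its dominant coefficients before classification, so an l_inf-bounded perturbation survives only on the retained support. The sketch below is a minimal stand-in, not the paper's implementation: it uses the identity basis and a simple top-k magnitude threshold, with a toy sparse signal.

```python
import numpy as np

def sparsify(x, k):
    """Keep the k largest-magnitude coefficients of x, zero the rest.

    A minimal stand-in for a sparsifying front end; the paper works in
    an (over)complete basis, whereas here the basis is the identity.
    """
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

# A sparse natural signal plus an l_inf-bounded perturbation (eps = 0.5).
rng = np.random.default_rng(0)
x = np.zeros(100)
x[:5] = 10.0                                  # 5 large coefficients
delta = 0.5 * rng.choice([-1.0, 1.0], size=100)
x_front = sparsify(x + delta, k=5)
# Only the k retained coordinates carry any perturbation, so for a
# linear classifier w the worst-case margin shift shrinks from
# eps * ||w||_1 to eps times the sum of the k largest |w_i|.
print(np.count_nonzero(x_front))  # → 5
```

With the toy values above, the large coefficients dominate the perturbation, so the front end recovers exactly the support of the clean signal; the defense's benefit grows as the data's sparsity level k becomes small relative to the input dimension.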
