1 code implementation • 26 Feb 2022 • Metehan Cekic, Can Bakiskan, Upamanyu Madhow
While end-to-end training of Deep Neural Networks (DNNs) yields state-of-the-art performance in an increasing array of applications, it does not provide insight into, or control over, the features being extracted.
1 code implementation • 12 Apr 2021 • Can Bakiskan, Metehan Cekic, Ahmet Dundar Sezer, Upamanyu Madhow
Deep Neural Networks are known to be vulnerable to small, adversarially crafted perturbations.
1 code implementation • 21 Nov 2020 • Can Bakiskan, Metehan Cekic, Ahmet Dundar Sezer, Upamanyu Madhow
Our nominal design trains the decoder and classifier together in standard supervised fashion, but we also consider unsupervised decoder training based on a regression objective (as in a conventional autoencoder), followed by separate supervised training of the classifier.
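The two training regimes above can be contrasted with a toy sketch (not the paper's actual code): a linear "decoder" and logistic "classifier" with hand-written gradients, where regime A trains both jointly on the classification loss and regime B first fits the decoder to a reconstruction objective and then trains the classifier on the frozen decoded features. All shapes, data, and learning rates here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 32, 8
X = rng.normal(size=(n, d))             # clean signals to reconstruct
H = X + 0.1 * rng.normal(size=(n, d))   # noisy observations fed to the decoder
y = (X[:, 0] > 0).astype(float)         # a simple downstream label
lr, steps = 0.02, 200

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    # binary cross-entropy
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# ---- Regime A: decoder + classifier trained jointly (supervised) ----
D = rng.normal(scale=0.1, size=(d, d))  # decoder weights
c = rng.normal(scale=0.1, size=d)       # classifier weights
joint_loss_start = bce(sigmoid((H @ D.T) @ c), y)
for _ in range(steps):
    Z = H @ D.T                         # decoded features
    p = sigmoid(Z @ c)
    err = (p - y) / n                   # dLoss/dLogit for BCE
    dZ = err[:, None] * c[None, :]      # backprop through the classifier
    c -= lr * (Z.T @ err)
    D -= lr * (dZ.T @ H)
joint_loss_end = bce(sigmoid((H @ D.T) @ c), y)

# ---- Regime B: unsupervised decoder (regression / autoencoder style) ----
D2 = rng.normal(scale=0.1, size=(d, d))
recon_start = np.mean((H @ D2.T - X) ** 2)
for _ in range(steps):
    R = H @ D2.T
    D2 -= lr * (2.0 / n) * ((R - X).T @ H)   # MSE gradient
recon_end = np.mean((H @ D2.T - X) ** 2)

# ...then separate supervised training of the classifier on frozen features
Z2 = H @ D2.T
c2 = rng.normal(scale=0.1, size=d)
clf_loss_start = bce(sigmoid(Z2 @ c2), y)
for _ in range(steps):
    p = sigmoid(Z2 @ c2)
    c2 -= lr * (Z2.T @ ((p - y) / n))
clf_loss_end = bce(sigmoid(Z2 @ c2), y)
```

In regime B the decoder never sees labels, so its features are shaped purely by the reconstruction objective; the classifier then adapts to whatever representation that produces.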
1 code implementation • 22 Feb 2020 • Can Bakiskan, Soorya Gopalakrishnan, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani
The vulnerability of deep neural networks to small, adversarially designed perturbations can be attributed to their "excessive linearity."
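The "excessive linearity" argument is commonly illustrated by the fast gradient sign method (FGSM): because the loss is nearly linear in the input, a small step along the sign of the input gradient reliably increases it. A minimal sketch for a toy logistic model follows; the weights, input, and epsilon are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # fixed "trained" weights (illustrative)
b = 0.1
x = rng.normal(size=16)   # a clean input example
y = 1.0                   # its true label

def loss(x):
    # binary cross-entropy for the logistic model p = sigmoid(w.x + b)
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss w.r.t. the input (exact here, since the logit is linear)
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
grad_x = (p - y) * w

eps = 0.05
x_adv = x + eps * np.sign(grad_x)   # FGSM perturbation: small, but worst-case

print(loss(x), loss(x_adv))
```

Each coordinate of the perturbation is tiny (at most eps), yet their effects add up through the linear logit, so the loss on `x_adv` exceeds the loss on the clean input.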