no code implementations • 21 Jan 2024 • Siddharth Mansingh, Michal Kucer, Garrett Kenyon, Juston Moore, Michael Teti
Deep neural networks (DNNs) are easily fooled by adversarial perturbations that are imperceptible to humans.
no code implementations • 8 Dec 2021 • Michal Kucer, Diane Oyen, Garrett Kenyon
We identify the primary ways in which self-supervision can be added to adversarial training, and observe that using a self-supervised loss both to optimize network parameters and to find adversarial examples yields the strongest improvement in model robustness, as this can be viewed as a form of ensemble adversarial training.
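The paper has no released code, but the core idea — use one self-supervised loss both to craft the adversarial perturbation and to update the weights — can be sketched under simplifying assumptions. Below, a toy linear autoencoder's reconstruction loss stands in for the self-supervised task, and an FGSM-style sign-gradient step stands in for the attack; all function names and hyperparameters are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ss_loss_and_grads(x, W1, W2):
    """Self-supervised reconstruction loss of a linear autoencoder,
    with gradients w.r.t. the weights and w.r.t. the input itself."""
    h = W1 @ x
    r = W2 @ h - x                        # reconstruction residual
    loss = float(np.sum(r * r))
    gW2 = 2.0 * np.outer(r, h)
    gW1 = 2.0 * np.outer(W2.T @ r, x)
    gx = 2.0 * ((W2 @ W1).T @ r - r)      # input gradient, through both uses of x
    return loss, gW1, gW2, gx

def ss_adv_train(X, dim_h=4, eps=0.05, lr=0.01, epochs=100):
    """Adversarial training driven entirely by the self-supervised loss:
    the same objective generates the attack and trains the network."""
    d = X.shape[1]
    W1 = 0.1 * rng.standard_normal((dim_h, d))
    W2 = 0.1 * rng.standard_normal((d, dim_h))
    for _ in range(epochs):
        for x in X:
            # 1) craft an adversarial input by ascending the self-supervised loss
            _, _, _, gx = ss_loss_and_grads(x, W1, W2)
            x_adv = x + eps * np.sign(gx)
            # 2) update the weights on that adversarial input, same loss
            _, gW1, gW2, _ = ss_loss_and_grads(x_adv, W1, W2)
            W1 -= lr * gW1
            W2 -= lr * gW2
    return W1, W2
```

The key point the sketch isolates is that no labels appear anywhere: both the inner maximization (attack) and the outer minimization (training) run on the self-supervised objective.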
no code implementations • CVPR 2018 • Edward Kim, Darryl Hannan, Garrett Kenyon
The brain does not operate solely in a feed-forward fashion; rather, neurons compete with one another, integrating information in both bottom-up and top-down directions and incorporating expectation and feedback into the modeling process.
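One classical way to make the lateral-competition idea concrete is sparse coding with a locally competitive algorithm (LCA), in which neurons receive bottom-up drive and suppress each other through overlapping receptive fields. The sketch below is an assumption on my part about the flavor of model involved — it is not the paper's implementation, and the parameter names are hypothetical.

```python
import numpy as np

def lca_sparse_code(x, D, lam=0.05, tau=10.0, steps=400):
    """Locally Competitive Algorithm: each unit integrates bottom-up drive
    D.T @ x while inhibiting units with similar dictionary elements."""
    G = D.T @ D - np.eye(D.shape[1])          # lateral inhibition weights
    b = D.T @ x                               # bottom-up (feed-forward) drive
    u = np.zeros(D.shape[1])                  # membrane potentials
    for _ in range(steps):
        # soft threshold: only sufficiently driven units fire (sparse code)
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
        # leaky integration plus competition from currently active units
        u += (b - u - G @ a) / tau
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
```

The competition term `G @ a` is what makes this non-feed-forward: a unit's activity depends on which other units are active, not just on its own input.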
no code implementations • 2 Nov 2017 • Mohit Dubey, Garrett Kenyon, Nils Carlson, Austin Thresher
The "cocktail party" problem of fully separating multiple sources from a single channel audio waveform remains unsolved.
no code implementations • 26 Oct 2017 • Yijing Watkins, Mohammad Sayeh, Oleksandr Iaroshenko, Garrett Kenyon
Bottleneck autoencoders have been actively researched as a solution to image compression tasks.
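To illustrate what a bottleneck buys for compression, consider the simplest possible case: a linear autoencoder, whose optimal k-dimensional bottleneck is spanned by the top-k principal components, so only k coefficients per sample need to be stored instead of d. This closed-form sketch is illustrative only and unrelated to the paper's architecture.

```python
import numpy as np

def linear_bottleneck_codec(X, k):
    """Closed-form linear 'bottleneck autoencoder': encode d-dim samples
    into k coefficients along the top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    E = Vt[:k]                                # encoder matrix: d -> k
    encode = lambda x: E @ (x - mu)           # store only k numbers
    decode = lambda z: E.T @ z + mu           # approximate reconstruction
    return encode, decode
```

Shrinking k raises the compression ratio (d/k) at the cost of reconstruction error; a learned nonlinear bottleneck plays the same role with a more expressive code.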
no code implementations • 15 Dec 2016 • Sunil Thulasidasan, Jeffrey Bilmes, Garrett Kenyon
We describe a computationally efficient, stochastic graph-regularization technique that can be utilized for the semi-supervised training of deep neural networks in a parallel or distributed setting.
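The general shape of such a technique — a supervised loss on the labeled subset plus a graph penalty, evaluated stochastically on a small random sample of edges each step so the full graph never has to be touched — can be sketched as follows. This is a minimal single-machine sketch under my own assumptions (linear softmax model, squared-difference edge penalty); the names and hyperparameters are hypothetical, not the authors' distributed implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_graph_reg(X, y_lab, lab_idx, edges, lam=0.5, lr=0.1, epochs=200, n_edge=8):
    """Semi-supervised training of a linear softmax classifier with a
    stochastic graph regularizer: each step samples a small minibatch of
    graph edges and penalizes disagreement between connected predictions."""
    n, d = X.shape
    k = int(y_lab.max()) + 1
    W = np.zeros((d, k))

    def probs(Z):
        s = Z @ W
        s -= s.max(axis=1, keepdims=True)     # numerically stable softmax
        e = np.exp(s)
        return e / e.sum(axis=1, keepdims=True)

    for _ in range(epochs):
        # supervised cross-entropy term on the labeled subset
        P = probs(X[lab_idx])
        G = P.copy()
        G[np.arange(len(lab_idx)), y_lab] -= 1.0
        gW = X[lab_idx].T @ G / len(lab_idx)
        # stochastic graph term: sample edges, pull predictions together
        sel = edges[rng.integers(0, len(edges), n_edge)]
        Pi, Pj = probs(X[sel[:, 0]]), probs(X[sel[:, 1]])
        diff = Pi - Pj
        for t in range(n_edge):
            # gradient of ||p_i - p_j||^2 through each softmax Jacobian
            Ji = np.diag(Pi[t]) - np.outer(Pi[t], Pi[t])
            Jj = np.diag(Pj[t]) - np.outer(Pj[t], Pj[t])
            gW += lam / n_edge * (np.outer(X[sel[t, 0]], Ji @ (2 * diff[t]))
                                  - np.outer(X[sel[t, 1]], Jj @ (2 * diff[t])))
        W -= lr * gW
    return W
```

Because each update sees only `n_edge` sampled edges rather than the whole graph, the regularizer's per-step cost is constant in graph size, which is what makes the stochastic formulation attractive for parallel or distributed training.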