no code implementations • 9 Feb 2024 • Darryl Hannan, Ragib Arnab, Gavin Parpart, Garrett T. Kenyon, Edward Kim, Yijing Watkins
In this paper, we investigate the viability of event streams for overhead object detection.
no code implementations • 23 Aug 2023 • Sayanton V. Dibbo, Juston S. Moore, Garrett T. Kenyon, Michael A. Teti
Audio classification aims to recognize audio signals, such as speech commands or sound events.
no code implementations • 25 Jul 2023 • Gavin Parpart, Sumedh R. Risbud, Garrett T. Kenyon, Yijing Watkins
Neuromorphic processors have garnered considerable interest in recent years for their potential in energy-efficient and high-speed computing.
no code implementations • 2 Jun 2023 • Kyle Henke, Elijah Pelofske, Georg Hahn, Garrett T. Kenyon
We demonstrate that neuromorphic computing is suitable for sampling low-energy solutions of binary sparse coding QUBO models. Although Loihi 1 can sample very sparse solutions of these models, the implementation needs improvement to be competitive with simulated annealing.
no code implementations • 30 May 2022 • Gavin Parpart, Carlos Gonzalez, Terrence C. Stewart, Edward Kim, Jocelyn Rego, Andrew O'Brien, Steven Nesbit, Garrett T. Kenyon, Yijing Watkins
The Locally Competitive Algorithm (LCA) uses local competition between non-spiking leaky integrator neurons to infer sparse representations, allowing for potentially real-time execution on massively parallel neuromorphic architectures such as Intel's Loihi processor.
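The core LCA dynamics can be sketched in a few lines of NumPy: each neuron leakily integrates a driving input and is inhibited by other active neurons in proportion to the overlap of their dictionary elements. The sketch below is illustrative only; the variable names, the hard-threshold activation, and the parameter values are assumptions, not the paper's Loihi implementation.

```python
import numpy as np

def lca(x, phi, lam=0.1, tau=10.0, n_steps=500):
    """Infer a sparse code `a` such that phi @ a approximates x.

    x   : (d,) input signal
    phi : (d, k) dictionary with unit-norm columns
    lam : activation threshold (controls sparsity)
    tau : membrane time constant (in solver steps)
    """
    b = phi.T @ x                                   # feed-forward drive
    gram = phi.T @ phi - np.eye(phi.shape[1])       # lateral inhibition weights
    u = np.zeros(phi.shape[1])                      # membrane potentials
    for _ in range(n_steps):
        a = np.where(np.abs(u) > lam, u, 0.0)       # hard-threshold activation
        u += (b - u - gram @ a) / tau               # leaky integration + competition
    return np.where(np.abs(u) > lam, u, 0.0)
```

Only neurons whose potentials exceed the threshold fire, and each active neuron suppresses competitors with similar receptive fields, which is what makes the algorithm "locally competitive" and amenable to parallel spiking hardware.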
no code implementations • 3 Dec 2021 • Boram Yoon, Chia Cheng Chang, Garrett T. Kenyon, Nga T. T. Nguyen, Ermal Rrapaj
In the compression algorithm, we define a mapping from floating-point lattice QCD data to binary coefficients that, together with a set of basis vectors, closely reconstruct the input data.
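Choosing binary coefficients a that minimize the reconstruction error ||x − Φa||² is naturally a QUBO: expanding the square gives aᵀΦᵀΦa − 2aᵀΦᵀx, and since aᵢ² = aᵢ for binary variables, the linear term folds into the diagonal of the QUBO matrix. The sketch below illustrates this construction; the function names, the sparsity penalty, and the brute-force solver (standing in for an annealer) are assumptions, not the paper's code.

```python
import numpy as np
from itertools import product

def sparse_binary_qubo(x, phi, lam=0.0):
    """Build Q such that, for binary a,
    a @ Q @ a == ||x - phi @ a||**2 - ||x||**2 + lam * a.sum()."""
    k = phi.shape[1]
    Q = phi.T @ phi
    # binary variables satisfy a_i**2 == a_i, so linear terms go on the diagonal
    Q[np.diag_indices(k)] += lam - 2.0 * (phi.T @ x)
    return Q

def brute_force_min(Q):
    """Exhaustive stand-in for the annealer: minimize a @ Q @ a over binary a."""
    k = Q.shape[0]
    best = min(product((0, 1), repeat=k),
               key=lambda a: np.asarray(a) @ Q @ np.asarray(a))
    return np.asarray(best)
```

An annealer samples low-energy states of exactly this Q; the exhaustive search above is only feasible for small dictionaries but makes the objective explicit.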
no code implementations • ICML Workshop AML 2021 • Jacob M. Springer, Melanie Mitchell, Garrett T. Kenyon
Adversarial examples for neural networks are known to be transferable: examples optimized to be misclassified by a “source” network are often misclassified by other “destination” networks.
no code implementations • NeurIPS 2021 • Jacob M. Springer, Melanie Mitchell, Garrett T. Kenyon
Adversarial examples for neural network image classifiers are known to be transferable: examples optimized to be misclassified by a source classifier are often misclassified as well by classifiers with different architectures.
no code implementations • 9 Feb 2021 • Jacob M. Springer, Melanie Mitchell, Garrett T. Kenyon
The results we present in this paper provide new insight into the nature of the non-robust features responsible for adversarial vulnerability of neural network classifiers.
no code implementations • 23 Nov 2020 • Edward Kim, Maryam Daniali, Jocelyn Rego, Garrett T. Kenyon
Research has shown that neurons within the brain are selective to certain stimuli.
no code implementations • 3 Sep 2020 • Jacob M. Springer, Garrett T. Kenyon
To investigate how weight initializations affect performance, we examine small convolutional networks trained to predict n steps of Conway's Game of Life, a two-dimensional cellular automaton whose update rules can be implemented efficiently in a convolutional network with 2n+1 layers.
no code implementations • 14 Nov 2019 • Nga T. T. Nguyen, Garrett T. Kenyon, Boram Yoon
We propose a regression algorithm that utilizes a learned dictionary optimized for sparse inference on a D-Wave quantum annealer.
no code implementations • 28 May 2019 • Nga T. T. Nguyen, Garrett T. Kenyon
To establish a benchmark for classification performance on this reduced dimensional data set, we used an AlexNet-like architecture implemented in TensorFlow, obtaining a classification score of $94.54 \pm 0.7\%$.
no code implementations • 17 Nov 2018 • Jacob M. Springer, Charles S. Strauss, Austin M. Thresher, Edward Kim, Garrett T. Kenyon
Although deep learning has shown great success in recent years, researchers have discovered a critical flaw: small, imperceptible changes to a system's input can drastically change its output classification.
no code implementations • 26 Oct 2017 • Jacob Carroll, Nils Carlson, Garrett T. Kenyon
Neural networks are analogous in many ways to spin glasses, systems which are known for their rich set of dynamics and equally complex phase diagrams.
no code implementations • 19 May 2017 • Sheng Y. Lundquist, Melanie Mitchell, Garrett T. Kenyon
We show that replacing a typical supervised convolutional layer with an unsupervised sparse-coding layer within a DCNN allows for better performance on a car detection task when only a limited number of labeled training examples is available.
no code implementations • 17 Jun 2014 • Peter F. Schultz, Dylan M. Paiton, Wei Lu, Garrett T. Kenyon
We find, for example, that for 16x16-pixel receptive fields, using eight kernels with a stride of 2 yields sparse reconstructions of quality comparable to using 512 kernels with a stride of 16 (the non-overlapping case).
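One way to read this result: the two configurations have the same overcompleteness when measured as sparse coefficients per input pixel, n_kernels / stride², so reconstruction quality tracks overcompleteness rather than kernel count alone. The arithmetic below is our reading of the setup, not an analysis the paper itself states in this form.

```python
def coeffs_per_pixel(n_kernels: int, stride: int) -> float:
    """Sparse coefficients per input pixel for a 2-D convolutional
    dictionary applied with a square stride."""
    return n_kernels / stride ** 2

# Both configurations from the comparison give the same overcompleteness:
# 8 / 2**2 == 2.0 and 512 / 16**2 == 2.0
assert coeffs_per_pixel(8, 2) == coeffs_per_pixel(512, 16) == 2.0
```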