no code implementations • 16 Jul 2022 • William Chang, Hassan Hamad, Keith M. Chugg
Furthermore, we consider the proposed signed-max-sum and max-star-sum generalizations of morphological ANNs and show that these variants also do not have universal approximation capabilities.
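As background, the operations named above can be sketched with a standard max-plus (morphological) neuron and the max* (log-sum-exp) operator familiar from log-domain decoding. This is a minimal illustration with assumed function names, not the paper's exact formulation; the signed variant is omitted.

```python
import numpy as np

def max_sum(x, w):
    # morphological (max-plus) neuron: max_i (x_i + w_i)
    return float(np.max(x + w))

def max_star_sum(x, w):
    # max* replaces the hard max with the smooth ln-sum-exp:
    # ln(sum_i exp(x_i + w_i)), always >= the max-sum output
    return float(np.log(np.sum(np.exp(x + w))))

x = np.array([0.5, -1.0, 2.0])
w = np.array([1.0, 3.0, -0.5])
# max_sum(x, w) = max(1.5, 2.0, 1.5) = 2.0
```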
no code implementations • 16 Mar 2022 • Daniel Beauchamp, Keith M. Chugg
Current-steering (CS) digital-to-analog converters (DACs) generate analog signals by combining weighted current sources.
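The combining of weighted current sources can be illustrated with an ideal binary-weighted model. The parameter names here are hypothetical, and a real CS-DAC must also contend with source mismatch and timing errors, which is what motivates linearization schemes.

```python
def cs_dac_output(code, n_bits=8, i_unit=1e-6):
    """Ideal binary-weighted CS-DAC model: bit k of the digital
    code steers a source of i_unit * 2**k amps onto the output."""
    assert 0 <= code < 2 ** n_bits
    return i_unit * sum(((code >> k) & 1) << k for k in range(n_bits))

# full-scale code 255 gives 255 uA with a 1 uA unit source
```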
no code implementations • 20 Nov 2020 • Daniel Beauchamp, Keith M. Chugg
This paper proposes a novel foreground linearization scheme for a high-speed CS-DAC.
2 code implementations • 27 Mar 2020 • Sourya Dey, Saikrishna C. Kanala, Keith M. Chugg, Peter A. Beerel
In particular, we show the superiority of a greedy strategy and justify our choice of Bayesian optimization as the primary search methodology over random/grid search.
1 code implementation • 29 Jan 2020 • Souvik Kundu, Mahdi Nazemi, Massoud Pedram, Keith M. Chugg, Peter A. Beerel
We also compared the performance of our proposed architectures with that of ShuffleNet and MobileNetV2.
1 code implementation • 22 Oct 2019 • Arnab Sanyal, Peter A. Beerel, Keith M. Chugg
The high computational complexity associated with training deep neural networks limits online and real-time training on edge devices.
no code implementations • 2 Oct 2019 • Souvik Kundu, Saurav Prakash, Haleh Akrami, Peter A. Beerel, Keith M. Chugg
To explore the potential of this approach, we have experimented with two widely accepted datasets, CIFAR-10 and Tiny ImageNet, in sparse variants of both the ResNet18 and VGG16 architectures.
2 code implementations • 4 Dec 2018 • Sourya Dey, Kuan-Wen Huang, Peter A. Beerel, Keith M. Chugg
Neural networks have proven to be extremely powerful tools for modern artificial intelligence applications, but computational and storage complexity remain limiting factors.
2 code implementations • 11 Jul 2018 • Sourya Dey, Keith M. Chugg, Peter A. Beerel
The algorithm and datasets are open-source.
1 code implementation • 31 May 2018 • Sourya Dey, Diandian Chen, Zongyang Li, Souvik Kundu, Kuan-Wen Huang, Keith M. Chugg, Peter A. Beerel
We demonstrate an FPGA implementation of a parallel and reconfigurable architecture for sparse neural networks, capable of on-chip training and inference.
no code implementations • 18 Nov 2017 • Sourya Dey, Peter A. Beerel, Keith M. Chugg
We propose a class of interleavers for a novel deep neural network (DNN) architecture that uses algorithmically pre-determined, structured sparsity to significantly lower memory and computational requirements, and speed up training.
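One simple way to picture an interleaver-defined sparsity pattern: deal a fixed permutation of the input indices out to the output neurons so each output receives an equal, pre-determined fan-in. This is an illustrative construction only, not the specific interleaver class the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(1)

def interleaver_mask(n_in, n_out, fanin):
    # simplest case: each input is used exactly once overall
    assert n_out * fanin == n_in
    idx = rng.permutation(n_in).reshape(n_out, fanin)
    mask = np.zeros((n_out, n_in), dtype=np.float32)
    for j in range(n_out):
        mask[j, idx[j]] = 1.0  # fixed fan-in per output neuron
    return mask
```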
no code implementations • ICLR 2018 • Sourya Dey, Kuan-Wen Huang, Peter A. Beerel, Keith M. Chugg
We propose a novel way of reducing the number of parameters in the storage-hungry fully connected layers of a neural network by using pre-defined sparsity, where the majority of connections are absent prior to starting training.
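Pre-defined sparsity amounts to fixing a binary connectivity mask before training and keeping it constant, with updates applied only to surviving weights. The sketch below uses a random mask for illustration, whereas the paper uses structured patterns; function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_fc_init(n_in, n_out, density):
    # connectivity is fixed before training and never changes
    mask = (rng.random((n_out, n_in)) < density).astype(np.float32)
    w = rng.standard_normal((n_out, n_in)).astype(np.float32) * mask
    return w, mask

def sparse_fc_step(w, mask, grad, lr=0.01):
    # absent connections receive no updates: the gradient is masked too
    return w - lr * grad * mask

w, mask = sparse_fc_init(100, 10, density=0.1)
```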
no code implementations • 3 Nov 2017 • Sourya Dey, Yinan Shao, Keith M. Chugg, Peter A. Beerel
We propose a reconfigurable hardware architecture for deep neural networks (DNNs) capable of online training and inference, which uses algorithmically pre-determined, structured sparsity to significantly lower memory and computational requirements.