Search Results for author: Keith M. Chugg

Found 13 papers, 6 papers with code

Approximation Capabilities of Neural Networks using Morphological Perceptrons and Generalizations

no code implementations • 16 Jul 2022 • William Chang, Hassan Hamad, Keith M. Chugg

Furthermore, we consider the proposed signed-max-sum and max-star-sum generalizations of morphological ANNs and show that these variants also do not have universal approximation capabilities.
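
The central object here is the morphological (max-sum) perceptron, which replaces the multiply-accumulate of an ordinary neuron with addition followed by a maximum (tropical algebra). A minimal NumPy sketch of one such layer (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def max_sum_layer(x, W):
    """Morphological (max-sum) layer: out_j = max_i (x_i + W_ij).

    The usual weighted sum is replaced by addition followed by a
    maximum, i.e. arithmetic in the (max, +) tropical semiring.
    """
    # x: shape (n,), W: shape (n, m) -> output: shape (m,)
    return np.max(x[:, None] + W, axis=0)

x = np.array([0.5, -1.0, 2.0])
W = np.random.randn(3, 4)
print(max_sum_layer(x, W))  # one output per column of W
```

The signed-max-sum and max-star-sum variants studied in the paper modify this basic operation; the result is that none of them are universal approximators.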

Improved Analysis of Current-Steering DACs Using Equivalent Timing Errors

no code implementations • 16 Mar 2022 • Daniel Beauchamp, Keith M. Chugg

Current-steering (CS) digital-to-analog converters (DACs) generate analog signals by combining weighted current sources.
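
As a point of reference, the ideal behavior being analyzed is simple: each input bit steers a binary-weighted current source to the output node, and the sources sum. A toy model of that ideal output (names illustrative; the paper's contribution is the timing-error analysis, which this sketch omits):

```python
def cs_dac_output(code, n_bits, i_lsb=1.0):
    """Ideal output current of a binary-weighted current-steering DAC.

    Bit k steers a source of weight 2**k * i_lsb to the output node,
    so the combined current is simply code * i_lsb.
    """
    bits = [(code >> k) & 1 for k in range(n_bits)]
    return sum(b * (2 ** k) * i_lsb for k, b in enumerate(bits))

# A 4-bit digital ramp mapped to analog current levels 0..15
print([cs_dac_output(c, n_bits=4) for c in range(16)])
```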

Deep-n-Cheap: An Automated Search Framework for Low Complexity Deep Learning

2 code implementations • 27 Mar 2020 • Sourya Dey, Saikrishna C. Kanala, Keith M. Chugg, Peter A. Beerel

In particular, we show the superiority of a greedy strategy and justify our choice of Bayesian optimization as the primary search methodology over random/grid search.

AutoML • Bayesian Optimization
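
The greedy strategy the abstract refers to searches one group of hyperparameters at a time, freezing each choice before moving on, rather than exploring the full Cartesian product. A loose sketch of that idea (the objective and search space below are toy stand-ins, not Deep-n-Cheap's actual interface):

```python
def greedy_stage_search(stages, evaluate):
    """Greedy stage-wise search: pick the best value for each
    hyperparameter group in turn, with earlier choices frozen,
    instead of searching all combinations jointly.

    stages:   list of (name, candidate_values) pairs
    evaluate: maps a partial config dict to a scalar score,
              e.g. accuracy minus a complexity penalty
    """
    config = {}
    for name, candidates in stages:
        config[name] = max(candidates,
                           key=lambda v: evaluate({**config, name: v}))
    return config

# Toy objective: reward depth and width, penalize parameter count
def score(cfg):
    depth, width = cfg.get("depth", 1), cfg.get("width", 16)
    return 0.5 * depth + 0.01 * width - 1e-4 * depth * width * width

print(greedy_stage_search(
    [("depth", [1, 2, 3, 4]), ("width", [16, 32, 64, 128])],
    score,
))
```

Per the abstract, Bayesian optimization, not the exhaustive scan above, is the primary methodology driving each stage's search in the framework itself.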

Pre-defined Sparsity for Low-Complexity Convolutional Neural Networks

1 code implementation • 29 Jan 2020 • Souvik Kundu, Mahdi Nazemi, Massoud Pedram, Keith M. Chugg, Peter A. Beerel

We also compared the performance of our proposed architectures with that of ShuffleNet and MobileNetV2.

Neural Network Training with Approximate Logarithmic Computations

1 code implementation • 22 Oct 2019 • Arnab Sanyal, Peter A. Beerel, Keith M. Chugg

The high computational complexity associated with training deep neural networks limits online and real-time training on edge devices.
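
The log-domain trick being exploited: in a logarithmic number system (LNS), multiplication becomes a cheap addition, while addition needs a correction term that hardware can approximate with a small lookup table. A generic sketch of those two primitives (magnitudes only; sign handling and the paper's specific approximation scheme are omitted):

```python
import numpy as np

def lns_mul(a, b):
    """Multiply in the log domain: log2(x * y) = log2 x + log2 y."""
    return a + b

def lns_add(a, b):
    """Log-domain addition:
    log2(2**a + 2**b) = max(a, b) + log2(1 + 2**-|a - b|).
    The correction term decays quickly in |a - b|, which is what
    makes cheap table/piecewise approximations viable in hardware.
    """
    d = np.abs(a - b)
    return np.maximum(a, b) + np.log2(1.0 + 2.0 ** (-d))

x, y = 3.0, 5.0
a, b = np.log2(x), np.log2(y)
print(2 ** lns_mul(a, b))  # ~15.0 = x * y
print(2 ** lns_add(a, b))  # ~8.0  = x + y
```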

A Pre-defined Sparse Kernel Based Convolution for Deep CNNs

no code implementations • 2 Oct 2019 • Souvik Kundu, Saurav Prakash, Haleh Akrami, Peter A. Beerel, Keith M. Chugg

To explore the potential of this approach, we have experimented with two widely accepted datasets, CIFAR-10 and Tiny ImageNet, in sparse variants of both the ResNet18 and VGG16 architectures.
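
A pre-defined sparse kernel here means the convolution window has a fixed support chosen before training: only those tap positions carry trainable weights, so the multiply count drops proportionally. A minimal single-channel sketch (the support pattern below is illustrative, not one of the paper's):

```python
import numpy as np

def sparse_kernel_conv2d(x, taps, support):
    """2-D convolution (deep-learning convention, i.e. cross-
    correlation) with a fixed sparse 3x3 kernel support: positions
    outside `support` are structurally zero and never trained.
    """
    k = 3
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for (di, dj), w in zip(support, taps):
        out += w * x[di:di + out.shape[0], dj:dj + out.shape[1]]
    return out

# 3 taps instead of 9: one third of the multiplies of a dense 3x3
support = [(0, 0), (1, 1), (2, 2)]
taps = np.array([0.5, -1.0, 0.25])
x = np.random.standard_normal((8, 8))
print(sparse_kernel_conv2d(x, taps, support).shape)  # (6, 6)
```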

Pre-Defined Sparse Neural Networks with Hardware Acceleration

2 code implementations • 4 Dec 2018 • Sourya Dey, Kuan-Wen Huang, Peter A. Beerel, Keith M. Chugg

Neural networks have proven to be extremely powerful tools for modern artificial intelligence applications, but computational and storage complexity remain limiting factors.

A Highly Parallel FPGA Implementation of Sparse Neural Network Training

1 code implementation • 31 May 2018 • Sourya Dey, Diandian Chen, Zongyang Li, Souvik Kundu, Kuan-Wen Huang, Keith M. Chugg, Peter A. Beerel

We demonstrate an FPGA implementation of a parallel and reconfigurable architecture for sparse neural networks, capable of on-chip training and inference.

Interleaver Design for Deep Neural Networks

no code implementations • 18 Nov 2017 • Sourya Dey, Peter A. Beerel, Keith M. Chugg

We propose a class of interleavers for a novel deep neural network (DNN) architecture that uses algorithmically pre-determined, structured sparsity to significantly lower memory and computational requirements, and speed up training.

Mathematical Proofs
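
The interleaver's role is to generate the fixed connection pattern: a permutation of the input index stream decides which inputs feed which outputs, keeping fan-in and fan-out balanced by construction. A rough sketch, with a random permutation standing in for the paper's algorithmically structured interleavers (all names illustrative):

```python
import numpy as np

def interleaved_mask(n_in, n_out, fanin, seed=0):
    """Fixed sparse connectivity from an interleaver: repeat the
    input indices so each is used equally often, permute the
    stream, then deal out `fanin` indices per output neuron.
    """
    rng = np.random.default_rng(seed)
    reps = n_out * fanin // n_in  # assumes this divides evenly
    stream = rng.permutation(np.repeat(np.arange(n_in), reps))
    mask = np.zeros((n_in, n_out), dtype=bool)
    for j in range(n_out):
        for i in stream[j * fanin:(j + 1) * fanin]:
            mask[i, j] = True
    return mask

mask = interleaved_mask(n_in=8, n_out=4, fanin=4)
print(mask.astype(int))
# At most fanin/n_in = 0.5 density (duplicate draws within a
# column can drop a connection in this toy version)
print("density:", mask.mean())
```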

Characterizing Sparse Connectivity Patterns in Neural Networks

no code implementations • ICLR 2018 • Sourya Dey, Kuan-Wen Huang, Peter A. Beerel, Keith M. Chugg

We propose a novel way of reducing the number of parameters in the storage-hungry fully connected layers of a neural network by using pre-defined sparsity, where the majority of connections are absent prior to starting training.

General Classification
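
Concretely, pre-defined sparsity amounts to fixing a boolean mask over the weight matrix before training and only ever storing, updating, and multiplying the surviving entries. A small sketch with a random mask (the paper studies structured connectivity patterns; the class and method names here are illustrative):

```python
import numpy as np

class SparseFC:
    """Fully connected layer whose connectivity is fixed before
    training: most weights are structurally zero and stay zero."""

    def __init__(self, n_in, n_out, density=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.mask = rng.random((n_in, n_out)) < density  # never changes
        self.W = rng.standard_normal((n_in, n_out)) * self.mask

    def forward(self, x):
        return x @ self.W  # only mask.sum() weights are ever nonzero

    def apply_grad(self, dW, lr=0.01):
        # Gradients on absent connections are discarded, so the
        # pattern never densifies during training.
        self.W -= lr * (dW * self.mask)

layer = SparseFC(784, 128, density=0.1)
out = layer.forward(np.random.standard_normal((32, 784)))
print(out.shape, "stored weights:", int(layer.mask.sum()))
```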

Accelerating Training of Deep Neural Networks via Sparse Edge Processing

no code implementations • 3 Nov 2017 • Sourya Dey, Yinan Shao, Keith M. Chugg, Peter A. Beerel

We propose a reconfigurable hardware architecture for deep neural networks (DNNs) capable of online training and inference, which uses algorithmically pre-determined, structured sparsity to significantly lower memory and computational requirements.
