Search Results for author: Dhireesha Kudithipudi

Found 18 papers, 2 papers with code

TENT: Efficient Quantization of Neural Networks on the tiny Edge with Tapered FixEd PoiNT

no code implementations • 6 Apr 2021 • Hamed F. Langroudi, Vedant Karia, Tej Pandit, Dhireesha Kudithipudi

In this research, we propose a new low-precision framework, TENT, to leverage the benefits of a tapered fixed-point numerical format in TinyML models.

Quantization
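TENT's tapered format departs from conventional uniform fixed-point quantization, where every representable value is equally spaced. For contrast, a minimal sketch of plain (non-tapered) fixed-point quantization; the function name and bit widths are illustrative, not taken from the paper:

```python
import numpy as np

def fixed_point_quantize(x, total_bits=8, frac_bits=6):
    """Round x onto a signed fixed-point grid with frac_bits fractional bits.

    Representable values are uniformly spaced 2**-frac_bits apart --
    the uniform spacing that tapered formats like TENT's depart from.
    """
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))       # most negative integer code
    qmax = 2 ** (total_bits - 1) - 1      # most positive integer code
    code = np.clip(np.round(x * scale), qmin, qmax)
    return code / scale
```

Values outside the representable range saturate at the grid's endpoints, which is the usual behavior assumed for edge-inference quantizers.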

End-to-End Memristive HTM System for Pattern Recognition and Sequence Prediction

no code implementations • 22 Jun 2020 • Abdullah M. Zyarah, Kevin Gomez, Dhireesha Kudithipudi

We also illustrate that the system offers 3.46X reduction in latency and 77.02X reduction in power consumption when compared to a custom CMOS digital design implemented at the same technology node.

Edge-computing • Online Learning

SIRNet: Understanding Social Distancing Measures with Hybrid Neural Network Model for COVID-19 Infectious Spread

2 code implementations • 22 Apr 2020 • Nicholas Soures, David Chambers, Zachariah Carmichael, Anurag Daram, Dimpy P. Shah, Kal Clark, Lloyd Potter, Dhireesha Kudithipudi

The SARS-CoV-2 infectious outbreak has rapidly spread across the globe and precipitated varying policies to effectuate physical distancing to ameliorate its impact.

Populations and Evolution

Metaplasticity in Multistate Memristor Synaptic Networks

no code implementations • 26 Feb 2020 • Fatima Tuz Zohora, Abdullah M. Zyarah, Nicholas Soures, Dhireesha Kudithipudi

In the $128\times128$ network, it is observed that the number of input patterns the multistate synapse can classify is $\simeq$2.1x that of a simple binary synapse model, at a mean accuracy of $\geq$75%.

Continual Learning
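To build intuition for the multistate-vs-binary comparison above, here is a toy sketch of a bounded multistate synapse update; the function name and state count are illustrative, not the paper's device model:

```python
def multistate_synapse_update(state, potentiate, n_states=8):
    """Move a synapse one conductance level up (potentiate) or down (depress),
    saturating at the device bounds. n_states=2 recovers a binary synapse."""
    if potentiate:
        return min(state + 1, n_states - 1)
    return max(state - 1, 0)
```

A binary synapse flips between only two levels, so interfering patterns overwrite it quickly; intermediate levels slow that overwriting, which is consistent with the capacity gain reported above.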

Cheetah: Mixed Low-Precision Hardware & Software Co-Design Framework for DNNs on the Edge

no code implementations • 6 Aug 2019 • Hamed F. Langroudi, Zachariah Carmichael, David Pastuch, Dhireesha Kudithipudi

Additionally, the framework is amenable to different quantization approaches and supports mixed-precision floating-point and fixed-point numerical formats.

Quantization

Deep Learning Training on the Edge with Low-Precision Posits

no code implementations • 30 Jul 2019 • Hamed F. Langroudi, Zachariah Carmichael, Dhireesha Kudithipudi

Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit).
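For readers unfamiliar with posits: a posit packs a sign bit, a variable-length regime field, up to es exponent bits, and the remaining fraction bits, which gives tapered precision around 1.0. A minimal 8-bit decoder sketch (my own illustration of the format, not the authors' code):

```python
def decode_posit(word, n=8, es=1):
    """Decode an n-bit posit bit pattern (unsigned int) to a float."""
    mask = (1 << n) - 1
    word &= mask
    if word == 0:
        return 0.0
    if word == 1 << (n - 1):           # Not-a-Real (NaR)
        return float("nan")
    sign = word >> (n - 1)
    if sign:
        word = (-word) & mask          # two's complement for negative posits
    body = word & ((1 << (n - 1)) - 1) # the n-1 bits after the sign
    first = (body >> (n - 2)) & 1
    run, i = 0, n - 2
    while i >= 0 and ((body >> i) & 1) == first:
        run += 1                       # regime: run of identical bits
        i -= 1
    k = (run - 1) if first else -run
    remaining = (n - 1) - run - 1      # bits left after regime + terminator
    exp, frac, frac_bits = 0, 0, 0
    if remaining > 0:
        e_bits = min(es, remaining)
        exp = (body >> (remaining - e_bits)) & ((1 << e_bits) - 1)
        exp <<= es - e_bits            # pad a truncated exponent with zeros
        frac_bits = remaining - e_bits
        frac = body & ((1 << frac_bits) - 1)
    mant = 1.0 + (frac / (1 << frac_bits) if frac_bits else 0.0)
    val = mant * 2.0 ** (k * (1 << es) + exp)
    return -val if sign else val
```

Because the regime length varies, small magnitudes near 1.0 get more fraction bits than extreme magnitudes, which is why low-bit posits suit the bell-shaped distributions of DNN parameters.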

Analysis of Wide and Deep Echo State Networks for Multiscale Spatiotemporal Time Series Forecasting

no code implementations • 1 Jul 2019 • Zachariah Carmichael, Humza Syed, Dhireesha Kudithipudi

Echo state networks are computationally lightweight reservoir models inspired by the random projections observed in cortical circuitry.

Time Series • Time Series Forecasting
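To make the "random projections" concrete: an echo state network drives a fixed random recurrent pool with the input and trains only a linear readout. A minimal leaky reservoir in NumPy (hyperparameters are illustrative; the paper's wide and deep variants widen or stack such reservoirs):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed random input projection
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed random recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # spectral radius < 1 (echo state property)

def reservoir_states(u_seq, leak=0.3):
    """Drive the leaky-integrator reservoir with a 1-D input sequence.

    Only the linear readout is trained in an ESN; these weights stay fixed.
    """
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)
```

In practice the collected states are regressed onto the forecasting targets with ridge regression, which is what keeps the model computationally lightweight.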

Performance-Efficiency Trade-off of Low-Precision Numerical Formats in Deep Neural Networks

no code implementations • 25 Mar 2019 • Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi

Our results indicate that posits are a natural fit for DNN inference, outperforming at $\leq$8-bit precision, and can be realized with competitive resource requirements relative to those of floating point.

Neuromemrisitive Architecture of HTM with On-Device Learning and Neurogenesis

no code implementations • 27 Dec 2018 • Abdullah M. Zyarah, Dhireesha Kudithipudi

A memristor that is suitable for emulating the HTM synapses is identified and a new Z-window function is proposed.

Deep Positron: A Deep Neural Network Using the Posit Number System

no code implementations • 5 Dec 2018 • Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi

We propose a precision-adaptable FPGA soft core for exact multiply-and-accumulate, enabling uniform comparison across three numerical formats: fixed-point, floating-point, and posit.
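The point of an exact multiply-and-accumulate is to defer rounding until after the whole dot product is summed. A software stand-in using Python's arbitrary-precision integers as the wide accumulator (a hypothetical illustration, not the FPGA core itself):

```python
def exact_mac_fixed(a_codes, b_codes, frac_bits=6):
    """Exact multiply-accumulate over fixed-point integer codes.

    Each product carries 2*frac_bits fractional bits; the integer
    accumulator never rounds, so a single rounding happens only when
    converting the final sum back to a real value -- mirroring the
    single-rounding behaviour of an exact MAC unit.
    """
    acc = 0
    for a, b in zip(a_codes, b_codes):
        acc += a * b
    return acc / float(1 << (2 * frac_bits))
```

With frac_bits=6, the codes 64 and 32 represent 1.0 and 0.5, so accumulating (1.0 × 1.0) + (0.5 × 1.0) yields exactly 1.5 with no intermediate rounding error.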

PositNN: Tapered Precision Deep Learning Inference for the Edge

no code implementations • 20 Oct 2018 • Hamed F. Langroudi, Zachariah Carmichael, John L. Gustafson, Dhireesha Kudithipudi

Conventional reduced-precision numerical formats, such as fixed-point and floating point, cannot accurately represent deep neural network parameters with a nonlinear distribution and small dynamic range.

Neuromorphic Architecture for the Hierarchical Temporal Memory

no code implementations • 17 Aug 2018 • Abdullah M. Zyarah, Dhireesha Kudithipudi

The spatial pooler architecture is synthesized on Xilinx ZYNQ-7, with 91.16% classification accuracy for MNIST and 90% accuracy for EUNF, with noise.

Anomaly Detection • Self-Learning

Mod-DeepESN: Modular Deep Echo State Network

no code implementations • 1 Aug 2018 • Zachariah Carmichael, Humza Syed, Stuart Burtner, Dhireesha Kudithipudi

Neuro-inspired recurrent neural network algorithms, such as echo state networks, are computationally lightweight and thereby map well onto untethered devices.

Time Series • Time Series Prediction

Deep Learning Inference on Embedded Devices: Fixed-Point vs Posit

no code implementations • 22 May 2018 • Seyed H. F. Langroudi, Tej Pandit, Dhireesha Kudithipudi

Performing the inference step of deep learning in resource constrained environments, such as embedded devices, is challenging.

Quantization

On the Statistical Challenges of Echo State Networks and Some Potential Remedies

no code implementations • 20 Feb 2018 • Qiuyi Wu, Ernest Fokoue, Dhireesha Kudithipudi

We create, develop, and implement a family of predictably optimal, robust, and stable ensembles of Echo State Networks by regularizing the training and perturbing the input.

Convolutional Drift Networks for Video Classification

no code implementations • 3 Nov 2017 • Dillon Graham, Seyed Hamed Fatemi Langroudi, Christopher Kanan, Dhireesha Kudithipudi

Analyzing spatio-temporal data like video is a challenging task that requires processing visual and temporal information effectively.

Classification • General Classification • +2

Unsupervised Learning in Neuromemristive Systems

no code implementations • 27 Jan 2016 • Cory Merkel, Dhireesha Kudithipudi

Neuromemristive systems (NMSs) currently represent the most promising platform to achieve energy efficient neuro-inspired computation.

A Mathematical Formalization of Hierarchical Temporal Memory's Spatial Pooler

1 code implementation • 22 Jan 2016 • James Mnatzaganian, Ernest Fokoué, Dhireesha Kudithipudi

Hierarchical temporal memory (HTM) is an emerging machine learning algorithm, with the potential to provide a means to perform predictions on spatiotemporal data.

Dimensionality Reduction
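The spatial pooler's core inference step can be sketched in NumPy under simplifying assumptions (global inhibition, no boosting or permanence learning; this is an illustration, not the paper's mathematical formalization):

```python
import numpy as np

def spatial_pooler_step(input_bits, connected, k):
    """One HTM-style spatial pooler inference step.

    connected: binary (columns x inputs) matrix of connected proximal synapses.
    Each column scores its overlap with the binary input vector; the top-k
    columns win under global inhibition and become active.
    """
    overlap = connected @ input_bits
    active = np.zeros(connected.shape[0], dtype=int)
    active[np.argsort(overlap)[-k:]] = 1     # k-winners-take-all
    return active
```

The k-winners-take-all step yields a sparse distributed representation of the input, which is the property the paper's formalization analyzes.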
