Search Results for author: Dhireesha Kudithipudi

Found 25 papers, 3 papers with code

A Mathematical Formalization of Hierarchical Temporal Memory's Spatial Pooler

1 code implementation22 Jan 2016 James Mnatzaganian, Ernest Fokoué, Dhireesha Kudithipudi

Hierarchical temporal memory (HTM) is an emerging machine learning algorithm with the potential to perform predictions on spatiotemporal data.

Attribute · Dimensionality Reduction
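At the core of the formalization is the spatial pooler's mapping of binary inputs to sparse distributed representations. The sketch below shows that overlap-and-inhibition step in NumPy; the sizes and the fixed random connectivity mask are illustrative assumptions, and learning (permanence updates) is omitted, so this is a simplification rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_columns = 100, 64        # illustrative sizes (assumptions)
k = max(1, int(0.05 * n_columns))    # ~5% of columns stay active

# Binary connectivity: each column connects to a random subset of input bits.
connected = (rng.random((n_columns, n_inputs)) < 0.3).astype(int)

def spatial_pooler(x):
    """Map a binary input to a sparse distributed representation (SDR)
    via overlap scoring and k-winners-take-all inhibition."""
    overlap = connected @ x                  # overlap score per column
    winners = np.argsort(overlap)[-k:]       # top-k columns win
    sdr = np.zeros(n_columns, dtype=int)
    sdr[winners] = 1
    return sdr

x = (rng.random(n_inputs) < 0.1).astype(int)  # sparse binary input
print(spatial_pooler(x).sum(), "active columns")
```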

Unsupervised Learning in Neuromemristive Systems

no code implementations27 Jan 2016 Cory Merkel, Dhireesha Kudithipudi

Neuromemristive systems (NMSs) currently represent the most promising platform to achieve energy efficient neuro-inspired computation.

BIG-bench Machine Learning · Clustering

Convolutional Drift Networks for Video Classification

no code implementations3 Nov 2017 Dillon Graham, Seyed Hamed Fatemi Langroudi, Christopher Kanan, Dhireesha Kudithipudi

Analyzing spatio-temporal data like video is a challenging task that requires processing visual and temporal information effectively.

Classification · General Classification +2
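The architecture pairs a pretrained CNN feature extractor with an untrained reservoir, so only a readout is ever trained and no backpropagation through time is needed. A minimal sketch of that pipeline, with random vectors standing in for per-frame CNN features (an assumption) and the classifier readout omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
feat_dim, res_dim, n_frames = 128, 300, 20   # illustrative sizes (assumptions)

# Reservoir weights are fixed and random; only a readout would be trained.
W_in = rng.uniform(-0.1, 0.1, (res_dim, feat_dim))
W = rng.normal(0, 1, (res_dim, res_dim))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def video_state(frames):
    """Drive the reservoir with per-frame features; return the final state,
    which a trained classifier would consume."""
    h = np.zeros(res_dim)
    for f in frames:
        h = np.tanh(W_in @ f + W @ h)
    return h

# Stand-in for pretrained-CNN features of one video clip (assumption).
frames = rng.normal(size=(n_frames, feat_dim))
print(video_state(frames)[:5])
```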

On the Statistical Challenges of Echo State Networks and Some Potential Remedies

no code implementations20 Feb 2018 Qiuyi Wu, Ernest Fokoue, Dhireesha Kudithipudi

We create, develop, and implement a family of predictably optimal, robust, and stable ensembles of Echo State Networks by regularizing the training and perturbing the input.
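To make those remedies concrete, here is a minimal NumPy sketch of one such ensemble: each member is an echo state network with a ridge-regularized readout and noise-perturbed inputs, and the members' predictions are averaged. The sizes, noise level, and ridge coefficient are illustrative assumptions, not the paper's settings.

```python
import numpy as np

n_in, n_res, T = 1, 200, 500                     # illustrative (assumptions)
u = np.sin(np.linspace(0, 20 * np.pi, T + 1))    # toy scalar series
X, y = u[:-1], u[1:]                             # one-step-ahead target

def run_esn(seed, noise=0.01, ridge=1e-4):
    """One ESN: fixed random reservoir, ridge-regressed readout,
    with input perturbation (noise) acting as a stabilizer."""
    r = np.random.default_rng(seed)
    W_in = r.uniform(-0.5, 0.5, (n_res, n_in))
    W = r.normal(0, 1, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
    h, states = np.zeros(n_res), np.empty((T, n_res))
    for t in range(T):
        x_t = X[t] + noise * r.normal()          # perturb the input
        h = np.tanh(W_in @ [x_t] + W @ h)
        states[t] = h
    # Ridge regression readout (regularized training).
    A = states.T @ states + ridge * np.eye(n_res)
    w_out = np.linalg.solve(A, states.T @ y)
    return states @ w_out

# Ensemble: average predictions from independently initialized ESNs.
preds = np.mean([run_esn(s) for s in range(5)], axis=0)
print("train MSE:", np.mean((preds - y) ** 2))
```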

Deep Learning Inference on Embedded Devices: Fixed-Point vs Posit

no code implementations22 May 2018 Seyed H. F. Langroudi, Tej Pandit, Dhireesha Kudithipudi

Performing the inference step of deep learning in resource constrained environments, such as embedded devices, is challenging.

Quantization
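For context on the fixed-point side of that comparison, below is a minimal sketch of symmetric fixed-point quantization (scale, round, saturate, rescale). The bit-widths are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def to_fixed_point(w, n_bits=8, frac_bits=6):
    """Quantize to signed fixed-point with `frac_bits` fractional bits
    (an illustrative Q1.6-style format, not the paper's exact scheme)."""
    scale = 2 ** frac_bits
    lo, hi = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    q = np.clip(np.round(w * scale), lo, hi)     # round, then saturate
    return q / scale                             # dequantized value

w = np.random.default_rng(3).normal(0, 0.5, 10)
print(np.max(np.abs(w - to_fixed_point(w))))     # worst-case error
```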

Mod-DeepESN: Modular Deep Echo State Network

no code implementations1 Aug 2018 Zachariah Carmichael, Humza Syed, Stuart Burtner, Dhireesha Kudithipudi

Neuro-inspired recurrent neural network algorithms, such as echo state networks, are computationally lightweight and thereby map well onto untethered devices.

Time Series · Time Series Prediction
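A minimal sketch of the modular layering idea: reservoirs are stacked so that each module is driven by the state of the one below it, and a readout (omitted here) would consume the concatenated states. All sizes and scalings are illustrative assumptions, not Mod-DeepESN's configuration.

```python
import numpy as np

rng = np.random.default_rng(4)
layer_sizes = [100, 100, 100]        # three reservoir modules (assumption)

def make_layer(n_in, n_res, r):
    W_in = r.uniform(-0.5, 0.5, (n_res, n_in))
    W = r.normal(0, 1, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

# Layer l is driven by the state of layer l-1 (the raw input for layer 0).
layers, n_prev = [], 1
for n_res in layer_sizes:
    layers.append(make_layer(n_prev, n_res, rng))
    n_prev = n_res

def step(states, x):
    """Advance all reservoir modules one tick; deeper modules develop
    slower, more abstract dynamics."""
    drive, new_states = np.atleast_1d(x), []
    for (W_in, W), h in zip(layers, states):
        h = np.tanh(W_in @ drive + W @ h)
        new_states.append(h)
        drive = h
    return new_states

states = [np.zeros(n) for n in layer_sizes]
for x in np.sin(np.linspace(0, 6, 50)):
    states = step(states, x)
print(np.concatenate(states).shape)   # features for a trained readout
```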

Neuromorphic Architecture for the Hierarchical Temporal Memory

no code implementations17 Aug 2018 Abdullah M. Zyarah, Dhireesha Kudithipudi

The spatial pooler architecture is synthesized on a Xilinx ZYNQ-7, achieving 91.16% classification accuracy on MNIST and 90% accuracy on EUNF, with noise.

Anomaly Detection · Self-Learning

PositNN: Tapered Precision Deep Learning Inference for the Edge

no code implementations20 Oct 2018 Hamed F. Langroudi, Zachariah Carmichael, John L. Gustafson, Dhireesha Kudithipudi

Conventional reduced-precision numerical formats, such as fixed-point and floating-point, cannot accurately represent deep neural network parameters with a nonlinear distribution and small dynamic range.
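Posits achieve tapered precision through a run-length-encoded regime field: values near 1 get more fraction bits, while the regime stretches to cover a wide dynamic range. Below is a small decoder for the standard posit format (shown for n=8, es=1), written for illustration only; it is not the PositNN implementation.

```python
def decode_posit(bits, n=8, es=1):
    """Decode an n-bit posit with `es` exponent bits to a float."""
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")                      # NaR (not a real)
    sign = -1 if (bits >> (n - 1)) & 1 else 1
    if sign < 0:
        bits = (-bits) & ((1 << n) - 1)          # posit negation: 2's complement
    body = bits & ((1 << (n - 1)) - 1)           # strip the sign bit
    first = (body >> (n - 2)) & 1                # regime polarity
    run = 0
    for i in range(n - 2, -1, -1):               # count the regime run
        if (body >> i) & 1 != first:
            break
        run += 1
    k = run - 1 if first else -run               # regime value
    rest_bits = n - 2 - run                      # bits after regime terminator
    rest = body & ((1 << rest_bits) - 1) if rest_bits > 0 else 0
    e_bits = min(es, max(rest_bits, 0))
    exp = (rest >> (rest_bits - e_bits)) if e_bits > 0 else 0
    exp <<= es - e_bits                          # pad truncated exponent bits
    frac_bits = max(rest_bits - e_bits, 0)
    frac = rest & ((1 << frac_bits) - 1) if frac_bits > 0 else 0
    fraction = 1 + frac / (1 << frac_bits) if frac_bits > 0 else 1.0
    useed = 1 << (1 << es)                       # useed = 2^(2^es)
    return sign * useed**k * 2**exp * fraction

print(decode_posit(0b01000000))   # 1.0
print(decode_posit(0b01110000))   # regime "1110" -> useed^2 = 16.0
```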

Deep Positron: A Deep Neural Network Using the Posit Number System

no code implementations5 Dec 2018 Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi

We propose a precision-adaptable FPGA soft core for exact multiply-and-accumulate, enabling a uniform comparison across three numerical formats: fixed-point, floating-point, and posit.

Neuromemristive Architecture of HTM with On-Device Learning and Neurogenesis

no code implementations27 Dec 2018 Abdullah M. Zyarah, Dhireesha Kudithipudi

A memristor suitable for emulating the HTM synapses is identified, and a new Z-window function is proposed.
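In memristor models, a window function scales the state update so that drift slows near the device boundaries. The paper's Z-window has its own form (see the paper); the sketch below only illustrates the role a window function plays, using the classic linear ion-drift model with the well-known Joglekar window and illustrative device parameters.

```python
import numpy as np

def joglekar_window(x, p=2):
    """Classic Joglekar window: suppresses state drift near the
    boundaries x=0 and x=1 of the memristor's internal state.
    (The paper's proposed Z-window has a different shape.)"""
    return 1 - (2 * x - 1) ** (2 * p)

def step_state(x, current, dt=1e-6, mu=1e-10, R_on=100.0, D=10e-9):
    """Linear ion-drift update: dx/dt = (mu*R_on/D^2) * i(t) * f(x).
    Device constants here are illustrative assumptions."""
    dxdt = (mu * R_on / D**2) * current * joglekar_window(x)
    return np.clip(x + dxdt * dt, 0.0, 1.0)

x = 0.5
for _ in range(100):
    x = step_state(x, current=1e-4)
print(f"state after 100 steps: {x:.4f}")
```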

Performance-Efficiency Trade-off of Low-Precision Numerical Formats in Deep Neural Networks

no code implementations25 Mar 2019 Zachariah Carmichael, Hamed F. Langroudi, Char Khazanov, Jeffrey Lillie, John L. Gustafson, Dhireesha Kudithipudi

Our results indicate that posits are a natural fit for DNN inference, outperforming at $\leq$8-bit precision, and can be realized with competitive resource requirements relative to those of floating point.

Analysis of Wide and Deep Echo State Networks for Multiscale Spatiotemporal Time Series Forecasting

no code implementations1 Jul 2019 Zachariah Carmichael, Humza Syed, Dhireesha Kudithipudi

Echo state networks are computationally lightweight reservoir models inspired by the random projections observed in cortical circuitry.

Time Series · Time Series Forecasting

Deep Learning Training on the Edge with Low-Precision Posits

no code implementations30 Jul 2019 Hamed F. Langroudi, Zachariah Carmichael, Dhireesha Kudithipudi

Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit).

Cheetah: Mixed Low-Precision Hardware & Software Co-Design Framework for DNNs on the Edge

no code implementations6 Aug 2019 Hamed F. Langroudi, Zachariah Carmichael, David Pastuch, Dhireesha Kudithipudi

Additionally, the framework is amenable for different quantization approaches and supports mixed-precision floating point and fixed-point numerical formats.

Quantization

Metaplasticity in Multistate Memristor Synaptic Networks

no code implementations26 Feb 2020 Fatima Tuz Zohora, Abdullah M. Zyarah, Nicholas Soures, Dhireesha Kudithipudi

In the $128\times128$ network, it is observed that the number of input patterns the multistate synapse can classify is $\simeq$2.1x that of a simple binary synapse model, at a mean accuracy of $\geq$75%.

Continual Learning
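A generic sketch of the idea: a synapse with a handful of discrete conductance levels plus a slow metaplastic variable that makes further switching progressively less likely. The switching rule and constants below are assumptions for illustration, not the paper's device-level model.

```python
import numpy as np

class MultistateSynapse:
    """Synapse with discrete conductance levels and a metaplastic
    counter that raises the barrier to further change (generic sketch)."""
    def __init__(self, n_levels=8, rng=None):
        self.n_levels = n_levels
        self.level = n_levels // 2
        self.meta = 0                  # metaplastic stability counter
        self.rng = rng or np.random.default_rng()

    def update(self, direction):
        """Potentiate (+1) or depress (-1); the more consolidated the
        synapse (higher meta), the less likely the state actually moves."""
        p_switch = 1.0 / (1 + self.meta)
        if self.rng.random() < p_switch:
            self.level = int(np.clip(self.level + direction,
                                     0, self.n_levels - 1))
            self.meta += 1             # each change consolidates further

    @property
    def weight(self):
        return self.level / (self.n_levels - 1)

syn = MultistateSynapse(rng=np.random.default_rng(5))
for _ in range(20):
    syn.update(+1)
print(syn.weight, syn.meta)
```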

SIRNet: Understanding Social Distancing Measures with Hybrid Neural Network Model for COVID-19 Infectious Spread

2 code implementations22 Apr 2020 Nicholas Soures, David Chambers, Zachariah Carmichael, Anurag Daram, Dimpy P. Shah, Kal Clark, Lloyd Potter, Dhireesha Kudithipudi

The SARS-CoV-2 outbreak has rapidly spread across the globe and precipitated varying policies to enforce physical distancing and ameliorate its impact.

Populations and Evolution
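SIRNet couples learned mobility/contact terms with compartmental epidemic dynamics. The sketch below integrates a plain SIR model with a time-varying contact rate standing in for the learned component; the rates and the policy schedule are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

def simulate_sir(beta_t, gamma=0.1, i0=1e-4, days=120):
    """Forward-Euler SIR: dS=-beta*S*I, dI=beta*S*I-gamma*I, dR=gamma*I.
    `beta_t` maps day -> contact rate; in SIRNet this role is played by
    a learned function of mobility data (this stand-in is an assumption)."""
    S, I, R = 1.0 - i0, i0, 0.0
    traj = []
    for t in range(days):
        beta = beta_t(t)
        new_inf, new_rec = beta * S * I, gamma * I
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        traj.append(I)
    return np.array(traj)

# Contact rate drops when a distancing policy begins on day 30.
policy = lambda t: 0.35 if t < 30 else 0.15
print(f"peak infected fraction: {simulate_sir(policy).max():.3f}")
```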

End-to-End Memristive HTM System for Pattern Recognition and Sequence Prediction

no code implementations22 Jun 2020 Abdullah M. Zyarah, Kevin Gomez, Dhireesha Kudithipudi

We also illustrate that the system offers 3.46X reduction in latency and 77.02X reduction in power consumption when compared to a custom CMOS digital design implemented at the same technology node.

Edge-computing · Low-latency processing

TENT: Efficient Quantization of Neural Networks on the tiny Edge with Tapered FixEd PoiNT

no code implementations6 Apr 2021 Hamed F. Langroudi, Vedant Karia, Tej Pandit, Dhireesha Kudithipudi

In this research, we propose a new low-precision framework, TENT, to leverage the benefits of a tapered fixed-point numerical format in TinyML models.

Quantization

NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems

1 code implementation10 Apr 2023 Jason Yik, Korneel Van den Berghe, Douwe den Blanken, Younes Bouhadjar, Maxime Fabre, Paul Hueber, Denis Kleyko, Noah Pacik-Nelson, Pao-Sheng Vincent Sun, Guangzhi Tang, Shenqi Wang, Biyan Zhou, Soikat Hasan Ahmed, George Vathakkattil Joseph, Benedetto Leto, Aurora Micheli, Anurag Kumar Mishra, Gregor Lenz, Tao Sun, Zergham Ahmed, Mahmoud Akl, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Petrut Bogdan, Sander Bohte, Sonia Buckley, Gert Cauwenberghs, Elisabetta Chicca, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Tobias Fischer, Jeremy Forest, Vittorio Fra, Steve Furber, P. Michael Furlong, William Gilpin, Aditya Gilra, Hector A. Gonzalez, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Yao-Hong Liu, Shih-Chii Liu, Haoyuan Ma, Rajit Manohar, Josep Maria Margarit-Taulé, Christian Mayr, Konstantinos Michmizos, Dylan Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Priyadarshini Panda, Jongkil Park, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Alessandro Pierro, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Samuel Schmidgall, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Matthew Stewart, Kenneth Stewart, Terrence C. Stewart, Philipp Stratmann, Jonathan Timcheck, Nergis Tömen, Gianvito Urgese, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi

The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings.

Benchmarking

Design Principles for Lifelong Learning AI Accelerators

no code implementations5 Oct 2023 Dhireesha Kudithipudi, Anurag Daram, Abdullah M. Zyarah, Fatima Tuz Zohora, James B. Aimone, Angel Yanguas-Gil, Nicholas Soures, Emre Neftci, Matthew Mattina, Vincenzo Lomonaco, Clare D. Thiem, Benjamin Epstein

Lifelong learning - an agent's ability to learn throughout its lifetime - is a hallmark of biological learning systems and a central challenge for artificial intelligence (AI).

Continual Learning and Catastrophic Forgetting

no code implementations8 Mar 2024 Gido M. van de Ven, Nicholas Soures, Dhireesha Kudithipudi

This book chapter delves into the dynamics of continual learning, which is the process of incrementally learning from a non-stationary stream of data.

Continual Learning
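One family of remedies such chapters cover is regularization-based consolidation. Below is a toy sketch of an EWC-style quadratic penalty that anchors the weights important to a previous task while a new task is learned; the tasks, importance values, and hyperparameters are contrived for illustration and are not the chapter's code.

```python
import numpy as np

# Toy tasks: task A's loss depends only on w[0]; task B pulls both weights.
loss_a = lambda w: (w[0] - 2.0) ** 2
opt_b = np.array([0.0, 2.0])
grad_b = lambda w: 2 * (w - opt_b)

w = np.array([2.0, 0.0])               # solution after training on task A
anchor = w.copy()
fisher = np.array([1.0, 0.0])          # importance: only w[0] matters to A
lam, lr = 10.0, 0.05

# Task B training with an EWC-style penalty on important weights.
for _ in range(200):
    g = grad_b(w) + lam * fisher * (w - anchor)
    w -= lr * g

print("weights:", w.round(3))          # w[0] stays near 2; w[1] moves freely
print("task-A loss:", round(loss_a(w), 3))
```

Without the penalty, w[0] would drift to task B's optimum and task A's loss would climb; the anchored weight is what prevents catastrophic forgetting in this toy setting.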

Probabilistic Metaplasticity for Continual Learning with Memristors

no code implementations13 Mar 2024 Fatima Tuz Zohora, Vedant Karia, Nicholas Soures, Dhireesha Kudithipudi

The proposed mechanism eliminates high-precision modification of weight magnitudes and, consequently, the high-precision memory needed for gradient accumulation.

Continual Learning
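A generic sketch of how such a rule can work: discrete low-precision weights switch probabilistically against the gradient sign, with a metaplastic state exponentially gating the switch probability, so no high-precision accumulator is kept. The gating form and constants below are assumptions, not the paper's exact rule.

```python
import numpy as np

rng = np.random.default_rng(7)

def prob_update(levels, grad_sign, meta, m=2.0):
    """Probabilistically move discrete weight levels against the gradient
    sign; switch probability decays exponentially with the metaplastic
    state, so consolidated weights rarely change (illustrative sketch)."""
    p = np.exp(-m * meta)                    # metaplastic gating
    flip = rng.random(levels.shape) < p
    levels = levels - grad_sign * flip       # no gradient accumulator needed
    return np.clip(levels, 0, 3)             # 2-bit weight levels (assumed)

levels = rng.integers(0, 4, size=8)
meta = rng.random(8)                         # per-weight consolidation
grad_sign = rng.choice([-1, 1], size=8)
print(prob_update(levels, grad_sign, meta))
```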
