no code implementations • 14 Feb 2025 • Abhishek Moitra, Arkapravo Ghosh, Shrey Agarwal, Aporva Amarnath, Karthik Swaminathan, Priyadarshini Panda
While prior LLM-targeted quantization and sparse-acceleration works have significantly mitigated the memory and computation bottlenecks, they do so assuming high-power platforms such as GPUs and server-class FPGAs with large off-chip memory bandwidth, and they employ generalized matrix multiplication (GEMM) execution for all layers in the decoder.
no code implementations • 10 Feb 2025 • Abhiroop Bhattacharjee, Jinquan Shi, Wei-Chen Chen, Xinxin Wang, Priyadarshini Panda
This work introduces a spike-based wearable analytics system utilizing Spiking Neural Networks (SNNs) deployed on an In-memory Computing engine based on RRAM crossbars, which are known for their compactness and energy-efficiency.
no code implementations • 4 Feb 2025 • Amit Ranjan Trivedi, Sina Tayebati, Hemant Kumawat, Nastaran Darabi, Divake Kumar, Adarsh Kumar Kosta, Yeshwanth Venkatesha, Dinithi Jayasuriya, Nethmi Jayasinghe, Priyadarshini Panda, Saibal Mukhopadhyay, Kaushik Roy
Autonomous edge computing in robotics, smart cities, and autonomous vehicles relies on the seamless integration of sensing, processing, and actuation for real-time decision-making in dynamic environments.
no code implementations • 24 Oct 2024 • Yuhang Li, Priyadarshini Panda
To effectively optimize the rounding in LLMs and stabilize the reconstruction process, we introduce progressive adaptive rounding.
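The abstract does not spell out the rounding mechanism, but a minimal sketch of adaptive (learned) rounding in the AdaRound style conveys the general idea: a per-weight continuous variable decides whether each weight rounds up or down, optimized so the quantized layer reproduces the full-precision output. The paper's progressive schedule is not reproduced here, and all names and hyperparameters below are illustrative assumptions.

```python
import torch

def rectified_sigmoid(v, zeta=1.1, gamma=-0.1):
    # Stretched sigmoid clamped to [0, 1], as in AdaRound-style soft rounding.
    return torch.clamp(torch.sigmoid(v) * (zeta - gamma) + gamma, 0, 1)

def adaptive_round(weight, scale, v):
    # Quantize with a learnable rounding direction h(v) in [0, 1]
    # instead of plain round-to-nearest.
    return (torch.floor(weight / scale) + rectified_sigmoid(v)) * scale

# Hypothetical usage: learn v so the quantized layer output matches
# the full-precision output (block-wise reconstruction).
w = torch.randn(64, 64)
scale = w.abs().max() / 127            # symmetric 8-bit scale
v = torch.zeros_like(w, requires_grad=True)
x = torch.randn(128, 64)
opt = torch.optim.Adam([v], lr=1e-2)
for _ in range(100):
    loss = ((x @ adaptive_round(w, scale, v).t() - x @ w.t()) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```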
no code implementations • 29 Sep 2024 • DongHyun Lee, Yuhang Li, Youngeun Kim, Shiting Xiao, Priyadarshini Panda
Spike-based Transformer presents a compelling and energy-efficient alternative to traditional Artificial Neural Network (ANN)-based Transformers, achieving impressive results through sparse binary computations.
1 code implementation • 3 Sep 2024 • Shiting Xiao, Yuhang Li, Youngeun Kim, DongHyun Lee, Priyadarshini Panda
Spiking Neural Networks (SNNs) have emerged as a compelling, energy-efficient alternative to traditional Artificial Neural Networks (ANNs) for static image tasks such as image classification and segmentation.
no code implementations • 22 Aug 2024 • Abhishek Moitra, Abhiroop Bhattacharjee, Yuhang Li, Youngeun Kim, Priyadarshini Panda
This review explores the intersection of bio-plausible artificial intelligence in the form of Spiking Neural Networks (SNNs) with the analog In-Memory Computing (IMC) domain, highlighting their collective potential for low-power edge computing environments.
no code implementations • 22 Aug 2024 • Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda
However, all prior works have neglected the overhead and co-dependence of attention blocks on the accuracy-energy-delay-area of IMC-implemented ViTs.
1 code implementation • 19 Jul 2024 • Ruokai Yin, Youngeun Kim, Di wu, Priyadarshini Panda
We observe that naively running a dual-sparse SNN on existing spMspM accelerators designed for dual-sparse Artificial Neural Networks (ANNs) exhibits sub-optimal efficiency.
no code implementations • 25 Feb 2024 • Youngeun Kim, Yuhang Li, Priyadarshini Panda
With the QR loss, our approach maintains a ~50% computational cost reduction during inference and outperforms prior two-stage PCL methods by ~1.4% on public class-incremental continual learning benchmarks, including CIFAR-100, ImageNet-R, and DomainNet.
no code implementations • 4 Feb 2024 • Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda
Specifically, we find pre-trained Vision Transformers (ViTs) to be vulnerable on crossbars due to the impact of write noise on the dynamically-generated Key (K) and Value (V) matrices in the attention layers, an effect not accounted for in prior studies.
no code implementations • 15 Jan 2024 • DongHyun Lee, Ruokai Yin, Youngeun Kim, Abhishek Moitra, Yuhang Li, Priyadarshini Panda
Spiking Neural Networks (SNNs) have gained significant attention as a potentially energy-efficient alternative to standard neural networks owing to their sparse binary activations.
no code implementations • 7 Dec 2023 • Yuhang Li, Youngeun Kim, DongHyun Lee, Souvik Kundu, Priyadarshini Panda
In the realm of deep neural network deployment, low-bit quantization presents a promising avenue for enhancing computational efficiency.
no code implementations • 1 Dec 2023 • Youngeun Kim, Adar Kahana, Ruokai Yin, Yuhang Li, Panos Stinis, George Em Karniadakis, Priyadarshini Panda
In this work, we delve into the role of skip connections, a widely used concept in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding.
no code implementations • 6 Sep 2023 • Abhiroop Bhattacharjee, Ruokai Yin, Abhishek Moitra, Priyadarshini Panda
Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities, utilizing bio-inspired activation functions and sparse binary spike-data representations.
no code implementations • 5 Sep 2023 • Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda
However, in prior detection works, the detector is attached to the classifier model, and the two work in tandem to perform adversarial detection; this incurs a high computational overhead that is not affordable at the low-power edge.
no code implementations • 31 Aug 2023 • Qian Zhang, Chenxi Wu, Adar Kahana, Youngeun Kim, Yuhang Li, George Em Karniadakis, Priyadarshini Panda
We introduce a method to convert Physics-Informed Neural Networks (PINNs), commonly used in scientific machine learning, to Spiking Neural Networks (SNNs), which are expected to have higher energy efficiency compared to traditional Artificial Neural Networks (ANNs).
no code implementations • 28 May 2023 • Abhiroop Bhattacharjee, Abhishek Moitra, Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda
In-Memory Computing (IMC) platforms such as analog crossbars are gaining attention as they facilitate the acceleration of low-precision Deep Neural Networks (DNNs) with high area and compute efficiency.
1 code implementation • 27 May 2023 • Yuhang Li, Abhishek Moitra, Tamar Geller, Priyadarshini Panda
Although the efficiency of SNNs can be realized on the In-Memory Computing (IMC) architecture, we show that the energy cost and latency of SNNs scale linearly with the number of timesteps used on IMC hardware.
no code implementations • 26 May 2023 • Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda
Due to increasing interest in adapting models on resource-constrained edge devices, parameter-efficient transfer learning has been widely explored.
no code implementations • 26 May 2023 • Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda
Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation.
1 code implementation • 16 May 2023 • Ruokai Yin, Yuhang Li, Abhishek Moitra, Priyadarshini Panda
We propose Multiplier-less INTeger (MINT) quantization, a uniform quantization scheme that efficiently compresses weights and membrane potentials in spiking neural networks (SNNs).
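As a rough illustration of the idea (not the paper's exact scheme), the sketch below applies one symmetric uniform quantizer to the weights and keeps the membrane potential in the same integer domain, so membrane updates stay multiplier-free; the bit-width and toy LIF dynamics are assumptions.

```python
import torch

def uniform_quantize(x, num_bits=4):
    # Symmetric uniform quantizer: map x to integers in [-2^(b-1), 2^(b-1)-1].
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max() / qmax
    return torch.clamp(torch.round(x / scale), -qmax - 1, qmax), scale

# Toy LIF layer where weights and membrane potential share one scale,
# so membrane accumulation stays in the integer domain (illustrative only).
w_int, scale = uniform_quantize(torch.randn(100, 784), num_bits=4)
mem_int = torch.zeros(100)
for t in range(4):                                 # timesteps
    spikes_in = (torch.rand(784) > 0.8).float()
    mem_int = mem_int + w_int @ spikes_in          # integer accumulation
    spikes_out = (mem_int * scale > 1.0).float()   # threshold in real units
    mem_int = mem_int * (1 - spikes_out)           # hard reset on spike
```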
no code implementations • 11 May 2023 • Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda
In this paper, we propose DC-NAS -- a divide-and-conquer approach that performs supernet-based Neural Architecture Search (NAS) in a federated system by systematically sampling the search space.
1 code implementation • 25 Apr 2023 • Yuhang Li, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda
However, some essential questions pertaining to SNNs remain little studied: Do SNNs trained with surrogate gradients learn different representations from traditional Artificial Neural Networks (ANNs)?
1 code implementation • 10 Apr 2023 • Jason Yik, Korneel Van den Berghe, Douwe den Blanken, Younes Bouhadjar, Maxime Fabre, Paul Hueber, Weijie Ke, Mina A Khoei, Denis Kleyko, Noah Pacik-Nelson, Alessandro Pierro, Philipp Stratmann, Pao-Sheng Vincent Sun, Guangzhi Tang, Shenqi Wang, Biyan Zhou, Soikat Hasan Ahmed, George Vathakkattil Joseph, Benedetto Leto, Aurora Micheli, Anurag Kumar Mishra, Gregor Lenz, Tao Sun, Zergham Ahmed, Mahmoud Akl, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Petrut Bogdan, Sander Bohte, Sonia Buckley, Gert Cauwenberghs, Elisabetta Chicca, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Tobias Fischer, Jeremy Forest, Vittorio Fra, Steve Furber, P. Michael Furlong, William Gilpin, Aditya Gilra, Hector A. Gonzalez, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Shih-Chii Liu, Yao-Hong Liu, Haoyuan Ma, Rajit Manohar, Josep Maria Margarit-Taulé, Christian Mayr, Konstantinos Michmizos, Dylan R. Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Priyadarshini Panda, Jongkil Park, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Samuel Schmidgall, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Kenneth Stewart, Matthew Stewart, Terrence C. Stewart, Jonathan Timcheck, Nergis Tömen, Gianvito Urgese, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi
To address these shortcomings, we present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems.
1 code implementation • 2 Apr 2023 • Yuhang Li, Tamar Geller, Youngeun Kim, Priyadarshini Panda
However, we observe that the information capacity in SNNs is affected by the number of timesteps, leading to an accuracy-efficiency tradeoff.
1 code implementation • 30 Mar 2023 • Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda
The hardware-efficiency and accuracy of Deep Neural Networks (DNNs) implemented on In-memory Computing (IMC) architectures primarily depend on the DNN architecture and the peripheral circuit parameters.
no code implementations • 15 Feb 2023 • Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda
This work proposes a two-phase algorithm-hardware co-optimization approach called XploreNAS that searches for hardware-efficient & adversarially robust neural architectures for non-ideal crossbar platforms.
no code implementations • 13 Feb 2023 • Ruokai Yin, Youngeun Kim, Yuhang Li, Abhishek Moitra, Nitin Satpute, Anna Hambitzer, Priyadarshini Panda
Though existing pruning methods can provide extremely high weight sparsity for deep SNNs, this high sparsity introduces a workload-imbalance problem.
no code implementations • 9 Feb 2023 • Duy-Thanh Nguyen, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda
The first innovation is an approximate dot-product built on computations in the Euclidean space that can replace addition and multiplication with simple bit-wise operations.
1 code implementation • 26 Nov 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, Priyadarshini Panda
After training, we observe that information becomes highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration.
1 code implementation • 14 Nov 2022 • Yuhang Li, Ruokai Yin, Hyoungseob Park, Youngeun Kim, Priyadarshini Panda
SNNs allow spatio-temporal extraction of features and enjoy low-power computation with binary spikes.
2 code implementations • 24 Oct 2022 • Abhishek Moitra, Abhiroop Bhattacharjee, Runcong Kuang, Gokul Krishnan, Yu Cao, Priyadarshini Panda
To this end, we propose SpikeSim, a tool that can perform realistic performance, energy, latency and area evaluation of IMC-mapped SNNs.
1 code implementation • 4 Jul 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda
To scale up a pruning technique towards deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that achieve performance comparable to the dense networks.
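A minimal sketch of Lottery-Ticket-style iterative magnitude pruning, assuming the usual prune-rewind-retrain loop; the SNN-specific training step is left as a placeholder and the sparsity schedule is illustrative.

```python
import copy
import torch
import torch.nn as nn

def magnitude_mask(model, sparsity):
    # Keep the largest-magnitude weights, zero out the rest.
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                      # prune weight matrices only
            k = int(p.numel() * (1 - sparsity))
            thresh = p.abs().flatten().kthvalue(p.numel() - k).values
            masks[name] = (p.abs() > thresh).float()
    return masks

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
init_state = copy.deepcopy(model.state_dict())   # weights at initialization

for round_sparsity in (0.5, 0.75, 0.9):          # iterative pruning rounds
    # train(model) ...                           # placeholder for (SNN) training
    masks = magnitude_mask(model, round_sparsity)
    model.load_state_dict(init_state)            # rewind to the winning ticket
    for name, p in model.named_parameters():
        if name in masks:
            p.data *= masks[name]                # apply the mask, then retrain
```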
no code implementations • 20 Jun 2022 • Abhiroop Bhattacharjee, Youngeun Kim, Abhishek Moitra, Priyadarshini Panda
Spiking Neural Networks (SNNs) have recently emerged as the low-power alternative to Artificial Neural Networks (ANNs) owing to their asynchronous, sparse, and binary information processing.
no code implementations • 11 Apr 2022 • Abhiroop Bhattacharjee, Yeshwanth Venkatesha, Abhishek Moitra, Priyadarshini Panda
Recent years have seen a paradigm shift towards multi-task learning.
1 code implementation • 11 Apr 2022 • Ruokai Yin, Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda
Based on SATA, we show quantitative analyses of the energy efficiency of SNN training and compare the training cost of SNNs and ANNs.
no code implementations • 24 Mar 2022 • Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Yuhang Li, Priyadarshini Panda
However, little attention has been paid to the additional challenges that emerge when federated aggregation is performed in a continual learning system.
1 code implementation • 11 Mar 2022 • Yuhang Li, Youngeun Kim, Hyoungseob Park, Tamar Geller, Priyadarshini Panda
To minimize this generalization gap, we propose Neuromorphic Data Augmentation (NDA), a family of geometric augmentations designed specifically for event-based datasets, with the goal of significantly stabilizing SNN training and reducing the gap between training and test performance.
Ranked #1 on Event data classification on CIFAR10-DVS (using extra training data)
1 code implementation • 9 Feb 2022 • Abhishek Moitra, Youngeun Kim, Priyadarshini Panda
We train a standalone detector independent of the classifier model, with a layer-wise energy separation (LES) training to increase the separation between natural and adversarial energies.
1 code implementation • 31 Jan 2022 • Youngeun Kim, Hyoungseob Park, Abhishek Moitra, Abhiroop Bhattacharjee, Yeshwanth Venkatesha, Priyadarshini Panda
Then, we measure the robustness of the coding techniques on two adversarial attack methods.
1 code implementation • 23 Jan 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Priyadarshini Panda
Interestingly, SNASNet found by our search algorithm achieves higher performance with backward connections, demonstrating the importance of designing SNN architecture for suitably using temporal information.
no code implementations • 13 Jan 2022 • Abhiroop Bhattacharjee, Lakshya Bhatnagar, Priyadarshini Panda
Although these techniques have claimed to preserve the accuracy of the sparse DNNs on crossbars, none has studied the impact of the inexorable crossbar non-idealities on the actual performance of the pruned networks.
no code implementations • 5 Jan 2022 • Youngeun Kim, Hyunsoo Kim, Seijoon Kim, Sang Joon Kim, Priyadarshini Panda
In addition, we propose Gradient-based Bit Encoding Optimization (GBO) which optimizes a different number of pulses at each layer, based on our in-depth analysis that each layer has a different level of noise sensitivity.
no code implementations • 14 Oct 2021 • Youngeun Kim, Joshua Chough, Priyadarshini Panda
Specifically, we first investigate two representative SNN optimization techniques for recognition tasks (i.e., ANN-SNN conversion and surrogate gradient learning) on semantic segmentation datasets.
no code implementations • 16 Sep 2021 • Adarsh Kumar Kosta, Malik Aqeel Anwar, Priyadarshini Panda, Arijit Raychowdhury, Kaushik Roy
To address this challenge, we propose a reconfigurable architecture with preemptive exits for efficient deep RL (RAPID-RL).
1 code implementation • 11 Jun 2021 • Yeshwanth Venkatesha, Youngeun Kim, Leandros Tassiulas, Priyadarshini Panda
To validate the proposed federated learning framework, we experimentally evaluate the advantages of SNNs on various aspects of federated learning with CIFAR10 and CIFAR100 benchmarks.
no code implementations • 9 May 2021 • Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda
In this paper, we show how the bit-errors in the 6T cells of hybrid 6T-8T memories minimize the adversarial perturbations in a DNN.
no code implementations • 7 Apr 2021 • Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda
2) Class leakage occurs when class-related features can be reconstructed from network parameters.
no code implementations • 26 Mar 2021 • Youngeun Kim, Priyadarshini Panda
Spiking Neural Networks (SNNs) compute and communicate with asynchronous binary temporal events that can lead to significant energy savings with neuromorphic hardware.
no code implementations • 12 Jan 2021 • Karina Vasquez, Yeshwanth Venkatesha, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda
Additionally, we find that integrating AD-based quantization with AD-based pruning (both conducted during training) yields up to ~198x and ~44x energy reductions for VGG19 and ResNet18 architectures, respectively, on a PIM platform compared to baseline 16-bit precision, unpruned models.
no code implementations • 5 Jan 2021 • Rachel Sterneck, Abhishek Moitra, Priyadarshini Panda
Based on prior works on detecting adversaries, we propose a structured methodology of augmenting a deep neural network (DNN) with a detector subnetwork.
no code implementations • 26 Nov 2020 • Abhishek Moitra, Priyadarshini Panda
In this work, we elicit the advantages and vulnerabilities of hybrid 6T-8T memories to improve the adversarial robustness and cause adversarial attacks on DNNs.
1 code implementation • 5 Oct 2020 • Youngeun Kim, Priyadarshini Panda
Different from previous works, our proposed BNTT decouples the parameters in a BNTT layer along the time axis to capture the temporal dynamics of spikes.
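A minimal sketch of the time-decoupled batch norm idea: one independent set of batch-norm statistics and learnable parameters per timestep, applied inside the SNN time loop. The exact BNTT formulation (e.g., which affine parameters are learned) is not reproduced here.

```python
import torch
import torch.nn as nn

class BNTT(nn.Module):
    """Batch norm whose statistics and affine parameters are kept per timestep."""
    def __init__(self, num_features, timesteps):
        super().__init__()
        # One independent BatchNorm per timestep to capture temporal dynamics.
        self.bns = nn.ModuleList(
            nn.BatchNorm1d(num_features) for _ in range(timesteps)
        )

    def forward(self, x, t):
        # x: [batch, num_features] pre-activation at timestep t
        return self.bns[t](x)

# Toy usage inside an SNN-style time loop.
layer, bntt = nn.Linear(784, 256), BNTT(256, timesteps=4)
mem = torch.zeros(32, 256)
for t in range(4):
    spikes_in = (torch.rand(32, 784) > 0.8).float()
    mem = mem + bntt(layer(spikes_in), t)
    spikes_out = (mem > 1.0).float()
    mem = mem * (1 - spikes_out)        # hard reset on spike
```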
1 code implementation • 3 Sep 2020 • Varigonda Pavan Teja, Priyadarshini Panda
Specifically, we decompose the weight filters using SVD and train the network on incremental tasks in its factorized form.
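A minimal sketch of the factorization step, assuming a fully connected layer for simplicity: the weight matrix is decomposed with SVD and replaced by two smaller layers that are then trained in factorized form; the rank and the incremental-task schedule are illustrative.

```python
import torch
import torch.nn as nn

def factorize_linear(linear, rank):
    # W (out x in) ~= U[:, :rank] @ diag(S[:rank]) @ Vh[:rank, :]
    U, S, Vh = torch.linalg.svd(linear.weight.data, full_matrices=False)
    A = U[:, :rank] * S[:rank]            # (out x rank)
    B = Vh[:rank, :]                      # (rank x in)
    # Replace the layer by two smaller layers trained in factorized form.
    first = nn.Linear(linear.in_features, rank, bias=False)
    second = nn.Linear(rank, linear.out_features, bias=True)
    first.weight.data.copy_(B)
    second.weight.data.copy_(A)
    if linear.bias is not None:
        second.bias.data.copy_(linear.bias.data)
    return nn.Sequential(first, second)

layer = nn.Linear(512, 512)
factorized = factorize_linear(layer, rank=64)   # then trained on incremental tasks
x = torch.randn(8, 512)
print((factorized(x) - layer(x)).abs().max())   # low-rank approximation error
```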
no code implementations • 25 Aug 2020 • Abhiroop Bhattacharjee, Priyadarshini Panda
Deep Neural Networks (DNNs) have been shown to be prone to adversarial attacks.
3 code implementations • 3 Jul 2020 • Youngeun Kim, Donghyeon Cho, Kyeongtak Han, Priyadarshini Panda, Sungeun Hong
Our key idea is to leverage a pre-trained model from the source domain and progressively update the target model in a self-learning manner.
1 code implementation • ICLR 2020 • Nitin Rathi, Gopalakrishnan Srinivasan, Priyadarshini Panda, Kaushik Roy
We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time steps for input processing.
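The sketch below illustrates only the second ingredient, spike-based backpropagation through a LIF-style neuron using a surrogate gradient; weights and thresholds are assumed to have already been initialized from a converted SNN, and the triangular surrogate stands in for STDB's actual gradient approximation.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a triangular surrogate gradient (illustrative)."""
    @staticmethod
    def forward(ctx, mem, threshold):
        ctx.save_for_backward(mem, torch.tensor(threshold))
        return (mem >= threshold).float()

    @staticmethod
    def backward(ctx, grad_out):
        mem, threshold = ctx.saved_tensors
        # Triangular surrogate centered on the threshold.
        surrogate = torch.clamp(1 - (mem - threshold).abs(), min=0)
        return grad_out * surrogate, None

spike_fn = SurrogateSpike.apply

# Weights/thresholds would be initialized from the converted SNN here.
w = torch.randn(10, 100, requires_grad=True)
threshold, mem, out = 1.0, torch.zeros(10), 0
for t in range(8):                      # fewer timesteps than pure conversion
    x = (torch.rand(100) > 0.8).float()
    mem = mem + w @ x
    s = spike_fn(mem, threshold)
    mem = mem - s * threshold           # soft reset
    out = out + s
loss = out.sum()
loss.backward()                          # spike-based backpropagation (BPTT)
```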
no code implementations • 22 Apr 2020 • Priyadarshini Panda
We identify a novel noise stability metric (ANS) for DNNs, i.e., the sensitivity of each layer's computation to adversarial noise.
1 code implementation • ECCV 2020 • Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy
Our results suggest that SNNs trained with LIF neurons and a smaller number of timesteps are more robust than those trained with IF (Integrate-and-Fire) neurons and a larger number of timesteps.
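A minimal sketch of the two neuron models being compared: with leak = 1 the membrane simply integrates input (IF), while leak < 1 lets weak, noise-like inputs decay away (LIF); the leak value and inputs are illustrative.

```python
import numpy as np

def simulate(inputs, threshold=1.0, leak=1.0):
    """leak=1.0 gives an IF neuron; leak<1.0 gives a LIF neuron."""
    mem, spikes = 0.0, []
    for x in inputs:
        mem = leak * mem + x          # LIF: membrane decays before integration
        if mem >= threshold:
            spikes.append(1)
            mem = 0.0                 # hard reset after a spike
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
inputs = rng.uniform(0, 0.4, size=20)
print("IF :", simulate(inputs, leak=1.0))
print("LIF:", simulate(inputs, leak=0.8))   # weak perturbations leak away
```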
no code implementations • 5 Mar 2020 • Sourjya Roy, Priyadarshini Panda, Gopalakrishnan Srinivasan, Anand Raghunathan
Our results for VGG-16 trained on CIFAR10 show that L1 normalization provides the best performance among all the techniques explored in this work, with less than 1% drop in accuracy after pruning 80% of the filters compared to the original network.
no code implementations • 2 Mar 2020 • Aosong Feng, Priyadarshini Panda
We achieve this by first training a small network (with fewer parameters) on a small subset of the original dataset, and then gradually expanding the network using the Net2Net transformation to train incrementally on larger subsets of the dataset.
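A minimal sketch of the function-preserving widening used by Net2Net (Net2WiderNet): random hidden units are duplicated and their outgoing weights split so the widened network computes the same function before training continues on the larger data subset; layer sizes here are arbitrary.

```python
import numpy as np

def net2wider(w1, b1, w2, new_width):
    """Widen a hidden layer (w1: in->hidden, w2: hidden->out) preserving outputs."""
    hidden = w1.shape[1]
    # Randomly choose which existing units to duplicate.
    mapping = np.concatenate([np.arange(hidden),
                              np.random.randint(0, hidden, new_width - hidden)])
    counts = np.bincount(mapping, minlength=hidden)
    w1_new = w1[:, mapping]                           # copy incoming weights
    b1_new = b1[mapping]
    w2_new = w2[mapping, :] / counts[mapping, None]   # split outgoing weights
    return w1_new, b1_new, w2_new

# Function-preservation check on random weights.
w1, b1, w2 = np.random.randn(8, 16), np.random.randn(16), np.random.randn(16, 4)
x = np.random.randn(5, 8)
h = np.maximum(x @ w1 + b1, 0)
w1n, b1n, w2n = net2wider(w1, b1, w2, new_width=24)
hn = np.maximum(x @ w1n + b1n, 0)
print(np.allclose(h @ w2, hn @ w2n))                  # True: same network function
```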
no code implementations • 25 Feb 2020 • Sai Aparna Aketi, Priyadarshini Panda, Kaushik Roy
To address this issue, we propose an ensemble of classifiers at hidden layers to enable energy efficient detection of natural errors.
no code implementations • 7 Feb 2020 • Timothy Foldy-Porto, Yeshwanth Venkatesha, Priyadarshini Panda
Neural network pruning with suitable retraining can yield networks with considerably fewer parameters than the original while maintaining comparable accuracy.
no code implementations • 30 Oct 2019 • Priyadarshini Panda, Aparna Aketi, Kaushik Roy
Spiking Neural Networks (SNNs) may offer an energy-efficient alternative for implementing deep learning applications.
no code implementations • 24 May 2019 • Deboleena Roy, Priyadarshini Panda, Kaushik Roy
The spiking autoencoders are benchmarked on MNIST and Fashion-MNIST and achieve very low reconstruction loss, comparable to ANNs.
no code implementations • 8 May 2019 • Priyadarshini Panda, Efstathia Soufleri, Kaushik Roy
We analyze the stability of recurrent networks, specifically, reservoir computing models during training by evaluating the eigenvalue spectra of the reservoir dynamics.
no code implementations • 7 May 2019 • Saima Sharmin, Priyadarshini Panda, Syed Shakib Sarwar, Chankyu Lee, Wachirawit Ponghiran, Kaushik Roy
In this work, we present, for the first time, a comprehensive analysis of the behavior of more bio-plausible networks, namely Spiking Neural Network (SNN) under state-of-the-art adversarial tests.
no code implementations • 15 Mar 2019 • Chankyu Lee, Syed Shakib Sarwar, Priyadarshini Panda, Gopalakrishnan Srinivasan, Kaushik Roy
Spiking Neural Networks (SNNs) have recently emerged as a prominent neural computing paradigm.
no code implementations • 8 Feb 2019 • Priyadarshini Panda, Indranil Chakraborty, Kaushik Roy
Specifically, discretizing the input space (or allowed pixel levels from 256 values or 8-bit to 4 values or 2-bit) extensively improves the adversarial robustness of DLNs for a substantial range of perturbations for minimal loss in test accuracy.
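A minimal sketch of the input discretization itself (not the full defense pipeline): pixel intensities are snapped to 2^b allowed levels, so small adversarial perturbations often fall within the same bin; the perturbation magnitude below is illustrative.

```python
import numpy as np

def discretize(image, bits=2):
    """Quantize pixel intensities in [0, 1] to 2**bits allowed levels."""
    levels = 2 ** bits                      # 4 levels for 2-bit inputs
    return np.round(image * (levels - 1)) / (levels - 1)

image = np.random.rand(32, 32, 3)           # stand-in for a normalized input
adv = np.clip(image + np.random.uniform(-0.03, 0.03, image.shape), 0, 1)

# Small perturbations are largely absorbed by coarse discretization.
print(np.mean(discretize(image) != discretize(adv)))   # fraction of changed pixels
```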
no code implementations • 15 Dec 2018 • Isha Garg, Priyadarshini Panda, Kaushik Roy
We demonstrate the proposed methodology on AlexNet and VGG style networks on the CIFAR-10, CIFAR-100 and ImageNet datasets, and successfully achieve an optimized architecture with a reduction of up to 3.8X and 9X in the number of operations and parameters, respectively, while trading off less than 1% accuracy.
1 code implementation • 5 Jul 2018 • Priyadarshini Panda, Kaushik Roy
We introduce a Noise-based prior Learning (NoL) approach for training neural networks that are intrinsically robust to adversarial attacks.
no code implementations • 13 Jun 2018 • Baibhab Chatterjee, Priyadarshini Panda, Shovan Maity, Ayan Biswas, Kaushik Roy, Shreyas Sen
In this work, we analyze, compare, and contrast existing neuron architectures with a proposed mixed-signal neuron (MS-N) in terms of performance, power, and noise, thereby demonstrating the applicability of the proposed mixed-signal neuron for achieving extreme energy efficiency in neuromorphic computing.
1 code implementation • 15 Feb 2018 • Deboleena Roy, Priyadarshini Panda, Kaushik Roy
Over the past decade, Deep Convolutional Neural Networks (DCNNs) have shown remarkable performance in most computer vision tasks.
no code implementations • 26 Dec 2017 • Priyadarshini Panda, Kaushik Roy
Anatomical studies demonstrate that the brain reformats input information to generate reliable responses for performing computations.
no code implementations • 24 Oct 2017 • Baibhab Chatterjee, Priyadarshini Panda, Shovan Maity, Kaushik Roy, Shreyas Sen
This work presents the design and analysis of a mixed-signal neuron (MS-N) for convolutional neural networks (CNN) and compares its performance with a digital neuron (Dig-N) in terms of operating frequency, power and noise.
no code implementations • 19 Oct 2017 • Priyadarshini Panda, Narayan Srinivasa
A fundamental challenge in machine learning today is to build a model that can learn from few examples.
no code implementations • 12 Oct 2017 • Nitin Rathi, Priyadarshini Panda, Kaushik Roy
We present a sparse SNN topology where non-critical connections are pruned to reduce the network size and the remaining critical synapses are weight quantized to accommodate for limited conductance levels.
no code implementations • 19 May 2017 • Akhilesh Jaiswal, Amogh Agrawal, Priyadarshini Panda, Kaushik Roy
The basic building blocks of such neuromorphic systems are neurons and synapses.
no code implementations • 12 May 2017 • Syed Shakib Sarwar, Priyadarshini Panda, Kaushik Roy
This combination creates a balanced system that gives better training performance in terms of energy and time, compared to the standalone CNN (without any Gabor kernels), in exchange for tolerable accuracy degradation.
no code implementations • 22 Mar 2017 • Priyadarshini Panda, Jason M. Allred, Shriram Ramanathan, Kaushik Roy
Against this backdrop, we present a novel unsupervised learning mechanism, ASP (Adaptive Synaptic Plasticity), for improved recognition with Spiking Neural Networks (SNNs) for real-time online learning in a dynamic environment.
no code implementations • 10 Mar 2017 • Priyadarshini Panda, Gopalakrishnan Srinivasan, Kaushik Roy
Brain-inspired learning models attempt to mimic the cortical architecture and computations performed in the neurons and synapses constituting the human brain to achieve its efficiency in cognitive tasks.
no code implementations • 20 Feb 2017 • Aayush Ankit, Abhronil Sengupta, Priyadarshini Panda, Kaushik Roy
In this paper, we propose RESPARC - a reconfigurable and energy-efficient architecture built on Memristive Crossbar Arrays (MCA) for deep Spiking Neural Networks (SNNs).
no code implementations • 12 Sep 2016 • Priyadarshini Panda, Aayush Ankit, Parami Wijesinghe, Kaushik Roy
We evaluate our approach for a 12-object classification task on the Caltech101 dataset and 10-object task on CIFAR-10 dataset by constructing FALCON models on the NeuE platform in 45nm technology.
no code implementations • 1 Aug 2016 • Priyadarshini Panda, Kaushik Roy
A set of binary classifiers is organized on top of the learnt hierarchy to minimize the overall test-time complexity.
no code implementations • 3 Feb 2016 • Priyadarshini Panda, Kaushik Roy
We present a spike-based unsupervised regenerative learning scheme to train Spiking Deep Networks (SpikeCNN) for object recognition problems using biologically realistic leaky integrate-and-fire neurons.
no code implementations • 29 Sep 2015 • Priyadarshini Panda, Abhronil Sengupta, Kaushik Roy
Deep learning neural networks have emerged as one of the most powerful classification tools for vision related applications.
no code implementations • 29 Sep 2015 • Priyadarshini Panda, Swagath Venkataramani, Abhronil Sengupta, Anand Raghunathan, Kaushik Roy
We propose a 2-stage hierarchical classification framework, with increasing levels of complexity, wherein the first stage is trained to recognize the broad representative semantic features relevant to the object of interest.