Search Results for author: Priyadarshini Panda

Found 79 papers, 23 papers with code

One-stage Prompt-based Continual Learning

no code implementations • 25 Feb 2024 • Youngeun Kim, Yuhang Li, Priyadarshini Panda

With the QR loss, our approach maintains a ~50% computational cost reduction during inference while outperforming prior two-stage PCL methods by ~1.4% on public class-incremental continual learning benchmarks, including CIFAR-100, ImageNet-R, and DomainNet.

Continual Learning

ClipFormer: Key-Value Clipping of Transformers on Memristive Crossbars for Write Noise Mitigation

no code implementations • 4 Feb 2024 • Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

Specifically, we find pre-trained Vision Transformers (ViTs) to be vulnerable on crossbars due to the impact of write noise on the dynamically-generated Key (K) and Value (V) matrices in the attention layers, an effect not accounted for in prior studies.

TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training

no code implementations • 15 Jan 2024 • DongHyun Lee, Ruokai Yin, Youngeun Kim, Abhishek Moitra, Yuhang Li, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained significant attention as a potentially energy-efficient alternative to standard neural networks owing to their sparse binary activations.

Tensor Decomposition

GenQ: Quantization in Low Data Regimes with Generative Synthetic Data

no code implementations • 7 Dec 2023 • Yuhang Li, Youngeun Kim, DongHyun Lee, Souvik Kundu, Priyadarshini Panda

In the realm of deep neural network deployment, low-bit quantization presents a promising avenue for enhancing computational efficiency.

Computational Efficiency · Quantization · +1

Rethinking Skip Connections in Spiking Neural Networks with Time-To-First-Spike Coding

no code implementations • 1 Dec 2023 • Youngeun Kim, Adar Kahana, Ruokai Yin, Yuhang Li, Panos Stinis, George Em Karniadakis, Priyadarshini Panda

In this work, we delve into the role of skip connections, a widely used concept in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding.

Are SNNs Truly Energy-efficient? - A Hardware Perspective

no code implementations • 6 Sep 2023 • Abhiroop Bhattacharjee, Ruokai Yin, Abhishek Moitra, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities, utilizing bio-inspired activation functions and sparse binary spike-data representations.

Benchmarking

RobustEdge: Low Power Adversarial Detection for Cloud-Edge Systems

no code implementations • 5 Sep 2023 • Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda

However, in prior detection works, the detector is attached to the classifier model, and the two work in tandem to perform adversarial detection; this incurs a high computational overhead that is not available at the low-power edge.

Adversarial Robustness · Quantization

Artificial to Spiking Neural Networks Conversion for Scientific Machine Learning

no code implementations • 31 Aug 2023 • Qian Zhang, Chenxi Wu, Adar Kahana, Youngeun Kim, Yuhang Li, George Em Karniadakis, Priyadarshini Panda

We introduce a method to convert Physics-Informed Neural Networks (PINNs), commonly used in scientific machine learning, to Spiking Neural Networks (SNNs), which are expected to have higher energy efficiency compared to traditional Artificial Neural Networks (ANNs).

Computational Efficiency

Examining the Role and Limits of Batchnorm Optimization to Mitigate Diverse Hardware-noise in In-memory Computing

no code implementations • 28 May 2023 • Abhiroop Bhattacharjee, Abhishek Moitra, Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda

In-Memory Computing (IMC) platforms such as analog crossbars are gaining attention as they facilitate the acceleration of low-precision Deep Neural Networks (DNNs) with high area and compute efficiency.

Quantization

Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient In-Memory Computing

1 code implementation • 27 May 2023 • Yuhang Li, Abhishek Moitra, Tamar Geller, Priyadarshini Panda

Although the efficiency of SNNs can be realized on the In-Memory Computing (IMC) architecture, we show that the energy cost and latency of SNNs scale linearly with the number of timesteps used on IMC hardware.

Sharing Leaky-Integrate-and-Fire Neurons for Memory-Efficient Spiking Neural Networks

no code implementations • 26 May 2023 • Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation.

Human Activity Recognition

Do We Really Need a Large Number of Visual Prompts?

no code implementations • 26 May 2023 • Youngeun Kim, Yuhang Li, Abhishek Moitra, Priyadarshini Panda

Due to increasing interest in adapting models on resource-constrained edges, parameter-efficient transfer learning has been widely explored.

Transfer Learning · Visual Prompt Tuning

MINT: Multiplier-less INTeger Quantization for Energy Efficient Spiking Neural Networks

1 code implementation • 16 May 2023 • Ruokai Yin, Yuhang Li, Abhishek Moitra, Priyadarshini Panda

We propose Multiplier-less INTeger (MINT) quantization, a uniform quantization scheme that efficiently compresses weights and membrane potentials in spiking neural networks (SNNs).

Quantization
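
The entry above describes a uniform, multiplier-less integer quantization of weights and membrane potentials. Below is a minimal numpy sketch of that general idea, assuming a single scale factor shared by weights and potentials so that spike-driven accumulation stays integer-only; the bit-width, shapes, and scale choice are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def uniform_quantize(x, scale, bits):
    """Uniformly quantize x to signed integers with the given number of bits."""
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q.astype(np.int32)

# Toy example: weights and membrane potentials share one scale factor, so the
# integer weights can be accumulated into the integer potential without any
# re-scaling multiplication at inference time.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(4, 8))
membrane = rng.normal(scale=0.1, size=4)

shared_scale = np.abs(weights).max() / (2 ** (4 - 1) - 1)   # 4-bit range (assumed)
w_q = uniform_quantize(weights, shared_scale, bits=4)
v_q = uniform_quantize(membrane, shared_scale, bits=4)

spikes = rng.integers(0, 2, size=8)          # binary input spikes
v_q = v_q + w_q @ spikes                     # integer accumulate, no multiplier
print(w_q, v_q)
```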

Divide-and-Conquer the NAS puzzle in Resource Constrained Federated Learning Systems

no code implementations • 11 May 2023 • Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda

In this paper, we propose DC-NAS -- a divide-and-conquer approach that performs supernet-based Neural Architecture Search (NAS) in a federated system by systematically sampling the search space.

Federated Learning · Neural Architecture Search · +1

Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient

1 code implementation • 25 Apr 2023 • Yuhang Li, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda

However, some essential questions pertaining to SNNs remain little studied: do SNNs trained with surrogate gradients learn different representations from traditional Artificial Neural Networks (ANNs)?

NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems

1 code implementation • 10 Apr 2023 • Jason Yik, Korneel Van den Berghe, Douwe den Blanken, Younes Bouhadjar, Maxime Fabre, Paul Hueber, Denis Kleyko, Noah Pacik-Nelson, Pao-Sheng Vincent Sun, Guangzhi Tang, Shenqi Wang, Biyan Zhou, Soikat Hasan Ahmed, George Vathakkattil Joseph, Benedetto Leto, Aurora Micheli, Anurag Kumar Mishra, Gregor Lenz, Tao Sun, Zergham Ahmed, Mahmoud Akl, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Petrut Bogdan, Sander Bohte, Sonia Buckley, Gert Cauwenberghs, Elisabetta Chicca, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Tobias Fischer, Jeremy Forest, Vittorio Fra, Steve Furber, P. Michael Furlong, William Gilpin, Aditya Gilra, Hector A. Gonzalez, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Yao-Hong Liu, Shih-Chii Liu, Haoyuan Ma, Rajit Manohar, Josep Maria Margarit-Taulé, Christian Mayr, Konstantinos Michmizos, Dylan Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Priyadarshini Panda, Jongkil Park, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Alessandro Pierro, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Samuel Schmidgall, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Matthew Stewart, Kenneth Stewart, Terrence C. Stewart, Philipp Stratmann, Jonathan Timcheck, Nergis Tömen, Gianvito Urgese, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi

The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings.

Benchmarking

SEENN: Towards Temporal Spiking Early-Exit Neural Networks

1 code implementation • 2 Apr 2023 • Yuhang Li, Tamar Geller, Youngeun Kim, Priyadarshini Panda

However, we observe that the information capacity in SNNs is affected by the number of timesteps, leading to an accuracy-efficiency tradeoff.
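
SEENN treats the number of timesteps as an input-dependent quantity. One hedged way to picture the early-exit idea is to stop accumulating timesteps once the running prediction looks confident; the confidence measure (max softmax probability), the threshold, and the `snn_step` interface below are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def early_exit_inference(snn_step, x, max_timesteps=8, conf_threshold=0.9):
    """Accumulate per-timestep logits and exit once the prediction is confident.

    `snn_step(x, t)` is assumed to return the logits produced at timestep t.
    """
    acc = None
    for t in range(max_timesteps):
        out = snn_step(x, t)
        acc = out if acc is None else acc + out
        probs = softmax(acc / (t + 1))
        if probs.max() >= conf_threshold:
            return probs.argmax(), t + 1      # predicted class, timesteps used
    return probs.argmax(), max_timesteps

# Dummy "SNN" whose logits favour class 2 a little more at each timestep.
dummy = lambda x, t: np.array([0.1, 0.2, 1.0 + t])
print(early_exit_inference(dummy, x=None))    # exits before the timestep budget
```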

XPert: Peripheral Circuit & Neural Architecture Co-search for Area and Energy-efficient Xbar-based Computing

1 code implementation • 30 Mar 2023 • Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda

The hardware-efficiency and accuracy of Deep Neural Networks (DNNs) implemented on In-memory Computing (IMC) architectures primarily depend on the DNN architecture and the peripheral circuit parameters.

XploreNAS: Explore Adversarially Robust & Hardware-efficient Neural Architectures for Non-ideal Xbars

no code implementations • 15 Feb 2023 • Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

This work proposes a two-phase algorithm-hardware co-optimization approach called XploreNAS that searches for hardware-efficient & adversarially robust neural architectures for non-ideal crossbar platforms.

Adversarial Robustness · Neural Architecture Search

Workload-Balanced Pruning for Sparse Spiking Neural Networks

no code implementations • 13 Feb 2023 • Ruokai Yin, Youngeun Kim, Yuhang Li, Abhishek Moitra, Nitin Satpute, Anna Hambitzer, Priyadarshini Panda

Though existing pruning methods can provide extremely high weight sparsity for deep SNNs, this high sparsity brings a workload-imbalance problem.

DeepCAM: A Fully CAM-based Inference Accelerator with Variable Hash Lengths for Energy-efficient Deep Neural Networks

no code implementations • 9 Feb 2023 • Duy-Thanh Nguyen, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

The first innovation is an approximate dot-product built on computations in the Euclidean space that can replace addition and multiplication with simple bit-wise operations.

Exploring Temporal Information Dynamics in Spiking Neural Networks

1 code implementation • 26 Nov 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, Priyadarshini Panda

After training, we observe that information becomes highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration.

SpikeSim: An end-to-end Compute-in-Memory Hardware Evaluation Tool for Benchmarking Spiking Neural Networks

2 code implementations • 24 Oct 2022 • Abhishek Moitra, Abhiroop Bhattacharjee, Runcong Kuang, Gokul Krishnan, Yu Cao, Priyadarshini Panda

To this end, we propose SpikeSim, a tool that can perform realistic performance, energy, latency and area evaluation of IMC-mapped SNNs.

Benchmarking

Exploring Lottery Ticket Hypothesis in Spiking Neural Networks

1 code implementation • 4 Jul 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda

To scale pruning techniques up to deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that achieve performance comparable to that of the dense networks.
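
As a rough illustration of the Lottery Ticket Hypothesis workflow mentioned above, the sketch below performs iterative magnitude pruning with rewinding to the initial weights; the SNN-specific training details (timesteps, surrogate gradients) are hidden behind a user-supplied `train_fn`, and the pruning schedule is an assumption.

```python
import numpy as np

def find_winning_ticket(init_weights, train_fn, rounds=3, prune_frac=0.5):
    """Iterative magnitude pruning in the spirit of the Lottery Ticket Hypothesis.

    `train_fn(weights, mask)` is a user-supplied routine that trains the masked
    network and returns the trained weights.
    """
    mask = np.ones_like(init_weights, dtype=bool)
    for _ in range(rounds):
        trained = train_fn(init_weights * mask, mask)
        # Prune the smallest-magnitude surviving weights...
        surviving = np.abs(trained[mask])
        threshold = np.quantile(surviving, prune_frac)
        mask &= np.abs(trained) > threshold
        # ...and rewind the remaining weights to their initial values.
    return init_weights * mask, mask

rng = np.random.default_rng(0)
w0 = rng.normal(size=(128, 128))
dummy_train = lambda w, m: w + 0.01 * rng.normal(size=w.shape) * m  # stand-in trainer
ticket, mask = find_winning_ticket(w0, dummy_train)
print(f"sparsity: {1 - mask.mean():.2f}")
```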

Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars

no code implementations • 20 Jun 2022 • Abhiroop Bhattacharjee, Youngeun Kim, Abhishek Moitra, Priyadarshini Panda

Spiking Neural Networks (SNNs) have recently emerged as the low-power alternative to Artificial Neural Networks (ANNs) owing to their asynchronous, sparse, and binary information processing.

SATA: Sparsity-Aware Training Accelerator for Spiking Neural Networks

1 code implementation • 11 Apr 2022 • Ruokai Yin, Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda

Based on SATA, we show quantitative analyses of the energy efficiency of SNN training and compare the training cost of SNNs and ANNs.

Total Energy

Addressing Client Drift in Federated Continual Learning with Adaptive Optimization

no code implementations • 24 Mar 2022 • Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Yuhang Li, Priyadarshini Panda

However, little attention has been paid to the additional challenges that emerge when federated aggregation is performed in a continual learning system.

Continual Learning · Federated Learning · +1

Neuromorphic Data Augmentation for Training Spiking Neural Networks

1 code implementation • 11 Mar 2022 • Yuhang Li, Youngeun Kim, Hyoungseob Park, Tamar Geller, Priyadarshini Panda

In an effort to minimize this generalization gap, we propose Neuromorphic Data Augmentation (NDA), a family of geometric augmentations specifically designed for event-based datasets with the goal of significantly stabilizing the SNN training and reducing the generalization gap between training and test performance.

 Ranked #1 on Event data classification on CIFAR10-DVS (using extra training data)

Contrastive Learning · Data Augmentation · +1
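
To make the idea of geometric, event-friendly augmentation concrete, here is a toy numpy sketch that applies a random shift and a random cutout to an event-frame tensor; the actual NDA transform set and parameters differ, so treat this purely as an illustration.

```python
import numpy as np

def random_shift(frames, max_shift=4, rng=None):
    """Roll every event frame by the same random (dy, dx) offset."""
    rng = rng or np.random.default_rng()
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(frames, shift=(dy, dx), axis=(-2, -1))

def random_cutout(frames, size=8, rng=None):
    """Zero out a random square patch across all timesteps and polarities."""
    rng = rng or np.random.default_rng()
    _, _, h, w = frames.shape
    y = rng.integers(0, h - size)
    x = rng.integers(0, w - size)
    out = frames.copy()
    out[..., y:y + size, x:x + size] = 0
    return out

# Event data as a (timesteps, polarity, H, W) binary tensor, e.g. from DVS frames.
events = (np.random.default_rng(0).random((10, 2, 48, 48)) > 0.95).astype(np.float32)
augmented = random_cutout(random_shift(events))
print(augmented.shape)
```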

Adversarial Detection without Model Information

1 code implementation • 9 Feb 2022 • Abhishek Moitra, Youngeun Kim, Priyadarshini Panda

We train a standalone detector independent of the classifier model, with a layer-wise energy separation (LES) training to increase the separation between natural and adversarial energies.

Neural Architecture Search for Spiking Neural Networks

1 code implementation • 23 Jan 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Priyadarshini Panda

Interestingly, SNASNet found by our search algorithm achieves higher performance with backward connections, demonstrating the importance of designing SNN architecture for suitably using temporal information.

Neural Architecture Search

Examining and Mitigating the Impact of Crossbar Non-idealities for Accurate Implementation of Sparse Deep Neural Networks

no code implementations • 13 Jan 2022 • Abhiroop Bhattacharjee, Lakshya Bhatnagar, Priyadarshini Panda

Although these techniques claim to preserve the accuracy of sparse DNNs on crossbars, none has studied the impact of the inexorable crossbar non-idealities on the actual performance of the pruned networks.

Gradient-based Bit Encoding Optimization for Noise-Robust Binary Memristive Crossbar

no code implementations • 5 Jan 2022 • Youngeun Kim, Hyunsoo Kim, Seijoon Kim, Sang Joon Kim, Priyadarshini Panda

In addition, we propose Gradient-based Bit Encoding Optimization (GBO) which optimizes a different number of pulses at each layer, based on our in-depth analysis that each layer has a different level of noise sensitivity.

Beyond Classification: Directly Training Spiking Neural Networks for Semantic Segmentation

no code implementations • 14 Oct 2021 • Youngeun Kim, Joshua Chough, Priyadarshini Panda

Specifically, we first investigate two representative SNN optimization techniques for recognition tasks (i.e., ANN-SNN conversion and surrogate gradient learning) on semantic segmentation datasets.

Autonomous Vehicles · Classification · +2

Federated Learning with Spiking Neural Networks

1 code implementation • 11 Jun 2021 • Yeshwanth Venkatesha, Youngeun Kim, Leandros Tassiulas, Priyadarshini Panda

To validate the proposed federated learning framework, we experimentally evaluate the advantages of SNNs on various aspects of federated learning with CIFAR10 and CIFAR100 benchmarks.

Federated Learning · Privacy Preserving

Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks

no code implementations • 9 May 2021 • Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

In this paper, we show how the bit-errors in the 6T cells of hybrid 6T-8T memories minimize the adversarial perturbations in a DNN.

Adversarial Robustness

PrivateSNN: Privacy-Preserving Spiking Neural Networks

no code implementations • 7 Apr 2021 • Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda

2) Class leakage occurs when class-related features can be reconstructed from network parameters.

Privacy Preserving

Visual Explanations from Spiking Neural Networks using Interspike Intervals

no code implementations • 26 Mar 2021 • Youngeun Kim, Priyadarshini Panda

Spiking Neural Networks (SNNs) compute and communicate with asynchronous binary temporal events that can lead to significant energy savings with neuromorphic hardware.

Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks

no code implementations • 12 Jan 2021 • Karina Vasquez, Yeshwanth Venkatesha, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

Additionally, we find that integrating AD-based quantization with AD-based pruning (both conducted during training) yields up to ~198x and ~44x energy reductions for VGG19 and ResNet18 architectures, respectively, on a PIM platform compared to baseline 16-bit-precision, unpruned models.

Model Compression · Quantization

Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks

no code implementations • 5 Jan 2021 • Rachel Sterneck, Abhishek Moitra, Priyadarshini Panda

Based on prior works on detecting adversaries, we propose a structured methodology of augmenting a deep neural network (DNN) with a detector subnetwork.

Quantization

Exposing the Robustness and Vulnerability of Hybrid 8T-6T SRAM Memory Architectures to Adversarial Attacks in Deep Neural Networks

no code implementations • 26 Nov 2020 • Abhishek Moitra, Priyadarshini Panda

In this work, we elicit the advantages and vulnerabilities of hybrid 6T-8T memories to improve the adversarial robustness and cause adversarial attacks on DNNs.

Adversarial Robustness

Revisiting Batch Normalization for Training Low-latency Deep Spiking Neural Networks from Scratch

1 code implementation • 5 Oct 2020 • Youngeun Kim, Priyadarshini Panda

Different from previous works, our proposed BNTT decouples the parameters in a BNTT layer along the time axis to capture the temporal dynamics of spikes.
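
A minimal PyTorch sketch of the stated idea, using one BatchNorm per timestep so that the normalization statistics and affine parameters are decoupled along the time axis; the feature sizes, timestep count, and usage pattern are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BNTT(nn.Module):
    """Batch norm with parameters decoupled along the time axis (one BN per timestep)."""
    def __init__(self, num_features, timesteps):
        super().__init__()
        self.bn = nn.ModuleList([nn.BatchNorm2d(num_features) for _ in range(timesteps)])

    def forward(self, x, t):
        # x: (batch, channels, H, W) activation at timestep t
        return self.bn[t](x)

layer = BNTT(num_features=16, timesteps=5)
x = torch.randn(8, 16, 32, 32)
outs = [layer(x, t) for t in range(5)]   # each timestep uses its own statistics
print(outs[0].shape)
```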

Compression-aware Continual Learning using Singular Value Decomposition

1 code implementation • 3 Sep 2020 • Varigonda Pavan Teja, Priyadarshini Panda

Specifically, we decompose the weight filters using SVD and train the network on incremental tasks in its factorized form.

Continual Learning · Model Compression
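
The abstract says the weight filters are decomposed with SVD and the network is then trained in factorized form. The numpy sketch below shows such a truncated-SVD factorization of a (flattened) filter bank; the rank and how the factors are handled across incremental tasks are assumptions, not the paper's recipe.

```python
import numpy as np

def factorize(weight, rank):
    """Split a weight matrix into two low-rank factors via truncated SVD."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    a = u[:, :rank] * s[:rank]          # (out, rank)
    b = vt[:rank, :]                    # (rank, in)
    return a, b

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128))         # e.g. a flattened conv filter bank
a, b = factorize(w, rank=16)

approx = a @ b                          # the network would now train a and b directly
rel_err = np.linalg.norm(w - approx) / np.linalg.norm(w)
print(a.shape, b.shape, f"relative error {rel_err:.2f}")
```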

Domain Adaptation without Source Data

3 code implementations • 3 Jul 2020 • Youngeun Kim, Donghyeon Cho, Kyeongtak Han, Priyadarshini Panda, Sungeun Hong

Our key idea is to leverage a pre-trained model from the source domain and progressively update the target model in a self-learning manner.

Attribute · Domain Adaptation · +1

Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation

1 code implementation • ICLR 2020 • Nitin Rathi, Gopalakrishnan Srinivasan, Priyadarshini Panda, Kaushik Roy

We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time steps for input processing.

Image Classification

QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks

no code implementations • 22 Apr 2020 • Priyadarshini Panda

We identify a novel noise stability metric (ANS) for DNNs, i.e., the sensitivity of each layer's computation to adversarial noise.

Adversarial Robustness · Quantization

Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations

1 code implementation • ECCV 2020 • Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy

Our results suggest that SNNs trained with LIF neurons and smaller number of timesteps are more robust than the ones with IF (Integrate-Fire) neurons and larger number of timesteps.

Adversarial Robustness · Attribute

Pruning Filters while Training for Efficiently Optimizing Deep Learning Networks

no code implementations • 5 Mar 2020 • Sourjya Roy, Priyadarshini Panda, Gopalakrishnan Srinivasan, Anand Raghunathan

Our results for VGG-16 trained on CIFAR-10 show that L1 normalization provides the best performance among all the techniques explored in this work, with less than a 1% drop in accuracy relative to the original network after pruning 80% of the filters.
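
As a hedged illustration of L1-norm-based filter pruning, the sketch below ranks convolutional filters by their L1 norms and zeroes out the lowest 80%; how this interacts with the training loop in the paper is not reproduced here, and the layer shape is an assumption.

```python
import numpy as np

def prune_filters_l1(conv_weight, prune_ratio=0.8):
    """Zero out the filters with the smallest L1 norms.

    conv_weight: (out_channels, in_channels, k, k). The 80% ratio mirrors the
    abstract; integrating this into training is left to the user.
    """
    norms = np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)
    n_prune = int(prune_ratio * conv_weight.shape[0])
    prune_idx = np.argsort(norms)[:n_prune]
    mask = np.ones(conv_weight.shape[0], dtype=bool)
    mask[prune_idx] = False
    pruned = conv_weight * mask[:, None, None, None]
    return pruned, mask

w = np.random.default_rng(0).normal(size=(64, 3, 3, 3))
pruned, kept = prune_filters_l1(w)
print(kept.sum(), "of", w.shape[0], "filters kept")
```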

Energy-efficient and Robust Cumulative Training with Net2Net Transformation

no code implementations • 2 Mar 2020 • Aosong Feng, Priyadarshini Panda

We achieve this by first training a small network (with fewer parameters) on a small subset of the original dataset, and then gradually expanding the network using the Net2Net transformation to train incrementally on larger subsets of the dataset.

Computational Efficiency
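
The cumulative-training idea relies on function-preserving network expansion. Below is a small numpy sketch of a Net2WiderNet-style widening step (new hidden units copy existing ones and the outgoing weights are split accordingly); the incremental data schedule from the paper is not modeled, and the layer shapes are illustrative.

```python
import numpy as np

def net2wider(w1, w2, new_width, rng=None):
    """Function-preserving widening of a hidden layer (Net2WiderNet-style).

    w1: (hidden, in) weights into the layer, w2: (out, hidden) weights out of it.
    Extra units copy existing ones, and outgoing weights are split so the
    network computes the same function before further training.
    """
    rng = rng or np.random.default_rng()
    hidden = w1.shape[0]
    mapping = np.concatenate([np.arange(hidden),
                              rng.integers(0, hidden, size=new_width - hidden)])
    counts = np.bincount(mapping, minlength=hidden)
    new_w1 = w1[mapping, :]
    new_w2 = w2[:, mapping] / counts[mapping]
    return new_w1, new_w2

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(4, 8)), rng.normal(size=(3, 4))
x = rng.normal(size=8)
W1, W2 = net2wider(w1, w2, new_width=6, rng=rng)

# With a linear activation the widened network matches the original exactly.
print(np.allclose(w2 @ (w1 @ x), W2 @ (W1 @ x)))
```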

Relevant-features based Auxiliary Cells for Energy Efficient Detection of Natural Errors

no code implementations • 25 Feb 2020 • Sai Aparna Aketi, Priyadarshini Panda, Kaushik Roy

To address this issue, we propose an ensemble of classifiers at hidden layers to enable energy efficient detection of natural errors.

Classification · General Classification · +1

Activation Density driven Energy-Efficient Pruning in Training

no code implementations • 7 Feb 2020 • Timothy Foldy-Porto, Yeshwanth Venkatesha, Priyadarshini Panda

Neural network pruning with suitable retraining can yield networks with considerably fewer parameters than the original while maintaining comparable accuracy.

Network Pruning

Synthesizing Images from Spatio-Temporal Representations using Spike-based Backpropagation

no code implementations • 24 May 2019 • Deboleena Roy, Priyadarshini Panda, Kaushik Roy

The spiking autoencoders are benchmarked on MNIST and Fashion-MNIST and achieve very low reconstruction loss, comparable to ANNs.

Image Generation

Evaluating the Stability of Recurrent Neural Models during Training with Eigenvalue Spectra Analysis

no code implementations • 8 May 2019 • Priyadarshini Panda, Efstathia Soufleri, Kaushik Roy

We analyze the stability of recurrent networks, specifically, reservoir computing models during training by evaluating the eigenvalue spectra of the reservoir dynamics.

Regression · Valid

A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks

no code implementations • 7 May 2019 • Saima Sharmin, Priyadarshini Panda, Syed Shakib Sarwar, Chankyu Lee, Wachirawit Ponghiran, Kaushik Roy

In this work, we present, for the first time, a comprehensive analysis of the behavior of more bio-plausible networks, namely Spiking Neural Network (SNN) under state-of-the-art adversarial tests.

Adversarial Robustness

Discretization based Solutions for Secure Machine Learning against Adversarial Attacks

no code implementations • 8 Feb 2019 • Priyadarshini Panda, Indranil Chakraborty, Kaushik Roy

Specifically, discretizing the input space (or allowed pixel levels from 256 values or 8-bit to 4 values or 2-bit) extensively improves the adversarial robustness of DLNs for a substantial range of perturbations for minimal loss in test accuracy.

Adversarial Robustness · BIG-bench Machine Learning
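
A minimal sketch of the input discretization described above, mapping 8-bit pixel intensities to 2-bit (4-level) values; the bit-width and input shape here are placeholders for illustration.

```python
import numpy as np

def discretize_input(images, bits=2):
    """Map 8-bit pixel intensities to 2**bits evenly spaced levels in [0, 1]."""
    levels = 2 ** bits - 1
    x = images.astype(np.float32) / 255.0
    return np.round(x * levels) / levels

img = np.random.default_rng(0).integers(0, 256, size=(1, 28, 28), dtype=np.uint8)
coarse = discretize_input(img, bits=2)           # only 4 distinct values remain
print(np.unique(coarse))
```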

A Low Effort Approach to Structured CNN Design Using PCA

no code implementations • 15 Dec 2018 • Isha Garg, Priyadarshini Panda, Kaushik Roy

We demonstrate the proposed methodology on AlexNet and VGG style networks on the CIFAR-10, CIFAR-100 and ImageNet datasets, and successfully achieve an optimized architecture with a reduction of up to 3.8X and 9X in the number of operations and parameters respectively, while trading off less than 1% accuracy.

Dimensionality Reduction · Model Compression

Implicit Generative Modeling of Random Noise during Training for Adversarial Robustness

1 code implementation • 5 Jul 2018 • Priyadarshini Panda, Kaushik Roy

We introduce a Noise-based prior Learning (NoL) approach for training neural networks that are intrinsically robust to adversarial attacks.

Adversarial Robustness

Exploiting Inherent Error-Resiliency of Neuromorphic Computing to achieve Extreme Energy-Efficiency through Mixed-Signal Neurons

no code implementations • 13 Jun 2018 • Baibhab Chatterjee, Priyadarshini Panda, Shovan Maity, Ayan Biswas, Kaushik Roy, Shreyas Sen

In this work, we will analyze, compare and contrast existing neuron architectures with a proposed mixed-signal neuron (MS-N) in terms of performance, power and noise, thereby demonstrating the applicability of the proposed mixed-signal neuron for achieving extreme energy-efficiency in neuromorphic computing.

General Classification

Tree-CNN: A Hierarchical Deep Convolutional Neural Network for Incremental Learning

1 code implementation • 15 Feb 2018 • Deboleena Roy, Priyadarshini Panda, Kaushik Roy

Over the past decade, Deep Convolutional Neural Networks (DCNNs) have shown remarkable performance in most computer vision tasks.

Incremental Learning · Object Recognition

Chaos-guided Input Structuring for Improved Learning in Recurrent Neural Networks

no code implementations • 26 Dec 2017 • Priyadarshini Panda, Kaushik Roy

Anatomical studies demonstrate that the brain reformats input information to generate reliable responses for performing computations.

An Energy-Efficient Mixed-Signal Neuron for Inherently Error-Resilient Neuromorphic Systems

no code implementations • 24 Oct 2017 • Baibhab Chatterjee, Priyadarshini Panda, Shovan Maity, Kaushik Roy, Shreyas Sen

This work presents the design and analysis of a mixed-signal neuron (MS-N) for convolutional neural networks (CNN) and compares its performance with a digital neuron (Dig-N) in terms of operating frequency, power and noise.

STDP Based Pruning of Connections and Weight Quantization in Spiking Neural Networks for Energy Efficient Recognition

no code implementations • 12 Oct 2017 • Nitin Rathi, Priyadarshini Panda, Kaushik Roy

We present a sparse SNN topology where non-critical connections are pruned to reduce the network size and the remaining critical synapses are weight quantized to accommodate for limited conductance levels.

General Classification · Quantization
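
As a rough sketch of the prune-then-quantize pipeline described above: non-critical synapses are removed and the survivors are snapped to a small set of conductance levels. Plain weight magnitude stands in for the STDP-derived importance used in the paper, and the sparsity and level count are assumptions.

```python
import numpy as np

def prune_and_quantize(weights, prune_frac=0.7, levels=8):
    """Prune low-importance synapses, then quantize survivors to a few levels.

    `levels` models the limited conductance states of a synaptic device.
    """
    threshold = np.quantile(np.abs(weights), prune_frac)
    mask = np.abs(weights) > threshold
    kept = weights * mask
    w_max = np.abs(kept).max()
    step = 2 * w_max / (levels - 1)
    quantized = np.round(kept / step) * step
    return quantized * mask, mask

w = np.random.default_rng(0).normal(size=(100, 100))
q, mask = prune_and_quantize(w)
print(f"kept {mask.mean():.0%} of synapses, {len(np.unique(q))} distinct values")
```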

Gabor Filter Assisted Energy Efficient Fast Learning Convolutional Neural Networks

no code implementations • 12 May 2017 • Syed Shakib Sarwar, Priyadarshini Panda, Kaushik Roy

This combination creates a balanced system that gives better training performance in terms of energy and time, compared to the standalone CNN (without any Gabor kernels), in exchange for tolerable accuracy degradation.

Face Detection · Object Recognition

ASP: Learning to Forget with Adaptive Synaptic Plasticity in Spiking Neural Networks

no code implementations • 22 Mar 2017 • Priyadarshini Panda, Jason M. Allred, Shriram Ramanathan, Kaushik Roy

Against this backdrop, we present a novel unsupervised learning mechanism ASP (Adaptive Synaptic Plasticity) for improved recognition with Spiking Neural Networks (SNNs) for real time on-line learning in a dynamic environment.

Denoising

Convolutional Spike Timing Dependent Plasticity based Feature Learning in Spiking Neural Networks

no code implementations • 10 Mar 2017 • Priyadarshini Panda, Gopalakrishnan Srinivasan, Kaushik Roy

Brain-inspired learning models attempt to mimic the cortical architecture and computations performed in the neurons and synapses constituting the human brain to achieve its efficiency in cognitive tasks.

Object Recognition

RESPARC: A Reconfigurable and Energy-Efficient Architecture with Memristive Crossbars for Deep Spiking Neural Networks

no code implementations • 20 Feb 2017 • Aayush Ankit, Abhronil Sengupta, Priyadarshini Panda, Kaushik Roy

In this paper, we propose RESPARC - a reconfigurable and energy efficient architecture built-on Memristive Crossbar Arrays (MCA) for deep Spiking Neural Networks (SNNs).

2D Object Detection · 2k

FALCON: Feature Driven Selective Classification for Energy-Efficient Image Recognition

no code implementations • 12 Sep 2016 • Priyadarshini Panda, Aayush Ankit, Parami Wijesinghe, Kaushik Roy

We evaluate our approach for a 12-object classification task on the Caltech101 dataset and a 10-object task on the CIFAR-10 dataset by constructing FALCON models on the NeuE platform in 45 nm technology.

BIG-bench Machine Learning · Classification · +1

Attention Tree: Learning Hierarchies of Visual Features for Large-Scale Image Recognition

no code implementations • 1 Aug 2016 • Priyadarshini Panda, Kaushik Roy

A set of binary classifiers is organized on top of the learnt hierarchy to minimize the overall test-time complexity.

Image Classification · Overall - Test

Unsupervised Regenerative Learning of Hierarchical Features in Spiking Deep Networks for Object Recognition

no code implementations • 3 Feb 2016 • Priyadarshini Panda, Kaushik Roy

We present a spike-based unsupervised regenerative learning scheme to train Spiking Deep Networks (SpikeCNN) for object recognition problems using biologically realistic leaky integrate-and-fire neurons.

General Classification · Object Recognition

Conditional Deep Learning for Energy-Efficient and Enhanced Pattern Recognition

no code implementations • 29 Sep 2015 • Priyadarshini Panda, Abhronil Sengupta, Kaushik Roy

Deep learning neural networks have emerged as one of the most powerful classification tools for vision related applications.

Classification · General Classification

Energy-Efficient Object Detection using Semantic Decomposition

no code implementations • 29 Sep 2015 • Priyadarshini Panda, Swagath Venkataramani, Abhronil Sengupta, Anand Raghunathan, Kaushik Roy

We propose a 2-stage hierarchical classification framework, with increasing levels of complexity, wherein the first stage is trained to recognize the broad representative semantic features relevant to the object of interest.

General Classification · Object · +2
