no code implementations • ECCV 2020 • Bing Han, Kaushik Roy
The real-valued ReLU activations in ANN are encoded using the spike-times of the TSC neurons in the converted TSC-SNN.
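The idea of encoding a real-valued activation in *when* a neuron spikes can be illustrated with a small sketch. The exact TSC encoding belongs to the paper; the linear time-to-first-spike mapping below (larger activation fires earlier) is only an assumed stand-in:

```python
import numpy as np

def relu_to_spike_times(activations, t_max=100):
    """Encode non-negative ReLU activations as time-to-first-spike:
    larger activations fire earlier. Illustrative only -- not the
    paper's exact TSC scheme."""
    a = np.maximum(activations, 0.0)      # ReLU
    a_norm = a / (a.max() + 1e-12)        # scale to [0, 1]
    return np.round(t_max * (1.0 - a_norm)).astype(int)

times = relu_to_spike_times(np.array([0.0, 0.5, 1.0]))
```

Here an activation of 1.0 maps to an immediate spike (t = 0) and an activation of 0.0 to the latest possible spike time.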
no code implementations • NAACL (CLPsych) 2022 • Adam Tsakalidis, Jenny Chim, Iman Munire Bilal, Ayah Zirikly, Dana Atzil-Slonim, Federico Nanni, Philip Resnik, Manas Gaur, Kaushik Roy, Becky Inkster, Jeff Leintz, Maria Liakata
We provide an overview of the CLPsych 2022 Shared Task, which focuses on the automatic identification of ‘Moments of Change’ in longitudinal posts by individuals on social media and its connection with information regarding mental health.
no code implementations • 11 Sep 2023 • Abhisek Tiwari, Muhammed Sinan, Kaushik Roy, Amit Sheth, Sriparna Saha, Pushpak Bhattacharyya
These lexical-based metrics have the following key limitations: (a) word-to-word matching without semantic consideration: the metric penalizes generating 'nice' and generating 'rice' equally when the reference is 'good'.
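The limitation can be seen in a toy lexical-overlap metric: because matching is purely surface-level, a near-synonym and an unrelated word receive identical scores. A minimal sketch (this simple overlap function is an assumption for illustration, not one of the metrics the paper evaluates):

```python
def word_overlap(candidate, reference):
    """Toy lexical metric: fraction of candidate tokens appearing
    verbatim in the reference (no semantic matching)."""
    cand, ref = candidate.split(), set(reference.split())
    return sum(tok in ref for tok in cand) / len(cand)

# Both candidates miss 'good' entirely, so the metric cannot tell
# that 'nice' is semantically close while 'rice' is unrelated.
s1 = word_overlap("a nice day", "a good day")
s2 = word_overlap("a rice day", "a good day")
```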
1 code implementation • 31 Jul 2023 • Kaushik Roy, Peyman Moghadam, Mehrtash Harandi
To address the problem, we propose a distillation strategy named L3DMC that operates on mixed-curvature spaces to preserve the already-learned knowledge by modeling and maintaining complex geometrical structures.
1 code implementation • 31 Jul 2023 • Kaushik Roy, Christian Simon, Peyman Moghadam, Mehrtash Harandi
To mitigate forgetting prior knowledge, we propose a novel knowledge distillation technique that takes into account the manifold structure of the latent/output space of a neural network in learning novel tasks.
no code implementations • 11 Jul 2023 • Isha Garg, Kaushik Roy
Neural networks are overparametrized and easily overfit the datasets they train on.
no code implementations • 27 Jun 2023 • Marco Paul E. Apolinario, Kaushik Roy
Spiking Neural Networks (SNNs) are biologically plausible models that have been identified as potentially apt for deploying energy-efficient intelligence at the edge, particularly for sequential learning tasks.
no code implementations • 24 Jun 2023 • Yuxin Zi, Kaushik Roy, Vignesh Narayanan, Manas Gaur, Amit Sheth
Crowdsourced and expert-curated knowledge graphs such as ConceptNet are designed to capture the meaning of words from a compact set of well-defined contexts.
no code implementations • 23 Jun 2023 • Kaushik Roy, Yuxin Zi, Vignesh Narayanan, Manas Gaur, Amit Sheth
However, the ad-hoc nature of existing methods makes it difficult to properly analyze the effects of knowledge infusion on the many moving parts or components of a transformer.
no code implementations • 16 Jun 2023 • Kaushik Roy, Yuxin Zi, Manas Gaur, Jinendra Malekar, Qi Zhang, Vignesh Narayanan, Amit Sheth
In this study, we introduce Process Knowledge-infused Learning (PK-iL), a new learning paradigm that layers clinical process knowledge structures on language model outputs, enabling clinician-friendly explanations of the underlying language model predictions.
Explainable Artificial Intelligence (XAI)
Language Modelling
no code implementations • 5 Jun 2023 • Shubham Negi, Deepika Sharma, Adarsh Kumar Kosta, Kaushik Roy
Spiking Neural Networks (SNNs) with their asynchronous event-driven compute, show great potential for extracting the spatio-temporal features from these event streams.
1 code implementation • 1 Jun 2023 • Revathy Venkataramanan, Kaushik Roy, Kanak Raj, Renjith Prasad, Yuxin Zi, Vignesh Narayanan, Amit Sheth
In this study, we explore the use of generative AI methods to extend current food computation models, primarily involving the analysis of nutrition and ingredients, to also incorporate cooking actions (e.g., add salt, fry the meat, boil the vegetables, etc.).
no code implementations • 24 May 2023 • Yinghan Long, Sayeed Shafayet Chowdhury, Kaushik Roy
Then we propose a segmented recurrent transformer (SRformer) that combines segmented attention with recurrent attention.
no code implementations • 22 May 2023 • Amogh Joshi, Adarsh Kosta, Wachirawit Ponghiran, Manish Nagaraj, Kaushik Roy
The ability of living organisms to perform complex high-speed maneuvers in flight with a very small number of neurons and an incredibly low failure rate highlights the efficacy of these resource-constrained biological systems.
no code implementations • 13 May 2023 • Kaushik Roy, Manas Gaur, Misagh Soltani, Vipula Rawte, Ashwin Kalyan, Amit Sheth
LMs augmented with ProKnow guided method generated 89% safer questions in the depression and anxiety domain.
no code implementations • 8 May 2023 • Kaushik Roy, Tarun Garg, Vedant Palit, Yuxin Zi, Vignesh Narayanan, Amit Sheth
However, they do not ascribe object and concept-level meaning and semantics to the learned stochastic patterns such as those described in knowledge graphs.
1 code implementation • 8 May 2023 • Sai Aparna Aketi, Abolfazl Hashemi, Kaushik Roy
Decentralized learning enables the training of deep learning models over large distributed datasets generated at different locations, without the need for a central server.
no code implementations • 1 May 2023 • Amit Sheth, Kaushik Roy, Manas Gaur
Humans interact with the environment using a combination of perception - transforming sensory inputs from their environment into symbols, and cognition - mapping symbols to knowledge about the environment for supporting abstraction, reasoning by analogy, and long-term planning.
no code implementations • 14 Apr 2023 • Xuan-Bac Nguyen, Chi Nhan Duong, Marios Savvides, Kaushik Roy, Hugh Churchill, Khoa Luu
Promoting fairness for deep clustering models in unsupervised clustering settings to reduce demographic bias is a challenging goal.
no code implementations • 9 Apr 2023 • Deepak Ravikumar, Gobinda Saha, Sai Aparna Aketi, Kaushik Roy
The goal of IDKD is to homogenize the data distribution across the nodes.
no code implementations • 31 Mar 2023 • Kaushik Roy, Vedant Khandelwal, Raxit Goswami, Nathan Dolbir, Jinendra Malekar, Amit Sheth
After the pandemic, artificial intelligence (AI) powered support for mental health care has become increasingly important.
no code implementations • 27 Mar 2023 • Sakshi Choudhary, Sai Aparna Aketi, Gobinda Saha, Kaushik Roy
Decentralized learning allows serverless training with spatially distributed data.
no code implementations • 13 Mar 2023 • Shrihari Sridharan, Jacob R. Stevens, Kaushik Roy, Anand Raghunathan
Transformers have achieved great success in a wide variety of natural language processing (NLP) tasks due to the attention mechanism, which assigns an importance score for every word relative to other words in a sequence.
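The importance scores the abstract refers to are the rows of the softmax matrix in scaled dot-product attention. A minimal self-attention sketch (dimensions chosen arbitrarily for illustration):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each row of the softmax matrix
    holds the importance scores of every position for one query."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

np.random.seed(0)
Q = K = V = np.random.randn(4, 8)   # 4 tokens, dim 8 (self-attention)
out, w = attention(Q, K, V)
```

Each row of `w` sums to 1 and gives the relative importance of every token to the query token for that row.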
1 code implementation • 2 Feb 2023 • Gobinda Saha, Kaushik Roy
In neural networks, continual learning results in gradient interference among sequential tasks, leading to catastrophic forgetting of old tasks while learning new ones.
no code implementations • CVPR 2023 • Isha Garg, Kaushik Roy
SLo-curves identifies the samples with low curvatures as being more data-efficient and trains on them with an additional regularizer that penalizes high curvature of the loss surface in their vicinity.
no code implementations • 19 Nov 2022 • Shristi Das Biswas, Adarsh Kosta, Chamika Liyanagedera, Marco Apolinario, Kaushik Roy
Our hybrid network reaches state-of-the-art performance on real-world DDD-17, MVSEC and DSEC-Semantic datasets with up to $\sim 33\times$ higher parameter efficiency and favorable inference cost (17.9 mJ per cycle), making it suitable for resource-constrained edge applications.
no code implementations • 3 Nov 2022 • Marco Paul E. Apolinario, Adarsh Kumar Kosta, Utkarsh Saxena, Kaushik Roy
Spiking Neural Networks (SNNs) are bio-plausible models that hold great potential for realizing energy-efficient implementations of sequential tasks on resource-constrained edge devices.
Ranked #7 on Gesture Recognition on DVS128 Gesture
no code implementations • 16 Oct 2022 • Rawshan Ara Mowri, Madhuri Siddula, Kaushik Roy
This research work utilizes the frequencies of different API calls to detect and classify ransomware families.
no code implementations • 9 Oct 2022 • Kaushik Roy, Yuxin Zi, Vignesh Narayanan, Manas Gaur, Amit Sheth
Domain-specific language understanding requires integrating multiple pieces of relevant contextual information.
no code implementations • 6 Oct 2022 • Efstathia Soufleri, Gobinda Saha, Kaushik Roy
We evaluate our method on the CIFAR10 image classification dataset and show that our synthetic data can be used for training networks from scratch, producing reasonable classification performance.
1 code implementation • 3 Oct 2022 • Manish Nagaraj, Chamika Mihiranga Liyanagedera, Kaushik Roy
Our technique consists of a lightweight spiking neural architecture that is able to separate events based on the speed of the corresponding objects.
no code implementations • 3 Oct 2022 • Wachirawit Ponghiran, Chamika Mihiranga Liyanagedera, Kaushik Roy
As a result, optical flow is only evaluated at a frequency much lower than the rate data is produced by an event-based camera, leading to a temporally sparse optical flow estimation.
1 code implementation • 28 Sep 2022 • Sai Aparna Aketi, Sangamesh Kodge, Kaushik Roy
Our experiments demonstrate that \textit{NGC} and \textit{CompNGC} outperform (by $0-6\%$) the existing SoTA decentralized learning algorithm over non-IID data with significantly less compute and memory requirements.
no code implementations • 21 Sep 2022 • Adarsh Kumar Kosta, Kaushik Roy
Spiking Neural Networks (SNNs), with their neuro-inspired event-driven processing can efficiently handle such asynchronous data, while neuron models such as the leaky integrate-and-fire (LIF) can keep track of the quintessential timing information contained in the inputs.
no code implementations • 19 Sep 2022 • Sourav Sanyal, Kaushik Roy
On the other hand, a regular data loss is used for adapting to residual disturbances (non-parametric uncertainties), unaccounted during mathematical modelling.
no code implementations • 18 Jun 2022 • Tarun Garg, Kaushik Roy, Amit Sheth
Knowledge Graphs are a great resource to capture semantic knowledge in terms of entities and relationships between the entities.
no code implementations • 9 Jun 2022 • Amit Sheth, Manas Gaur, Kaushik Roy, Revathy Venkataraman, Vedant Khandelwal
For such applications, in addition to data and domain knowledge, the AI systems need to have access to and use the Process Knowledge, an ordered set of steps that the AI system needs to use or adhere to.
no code implementations • 3 Jun 2022 • Wilfried Haensch, Anand Raghunathan, Kaushik Roy, Bhaswar Chakrabart, Charudatta M. Phatak, Cheng Wang, Supratik Guha
In the second part, we review what is known about the different new non-volatile memory materials and devices suited for compute in-memory, and discuss the outlook and challenges.
1 code implementation • NAACL (CLPsych) 2022 • Shrey Gupta, Anmol Agarwal, Manas Gaur, Kaushik Roy, Vignesh Narayanan, Ponnurangam Kumaraguru, Amit Sheth
We demonstrate the challenge of using existing datasets to train a DLM for generating FQs that adhere to clinical process knowledge.
no code implementations • 6 May 2022 • Deepak Ravikumar, Kaushik Roy
Therefore, applying a single threshold for all classes is not ideal since the same similarity score represents different uncertainties for different classes.
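The per-class threshold idea can be sketched by calibrating a separate cutoff from each class's in-distribution validation scores; the quantile-based calibration below is an assumed illustration, not necessarily the paper's procedure:

```python
import numpy as np

def per_class_thresholds(scores, labels, quantile=0.05):
    """Calibrate one similarity threshold per class from in-distribution
    validation scores, instead of one global cutoff."""
    return {c: np.quantile(scores[labels == c], quantile)
            for c in np.unique(labels)}

scores = np.array([0.9, 0.8, 0.95, 0.4, 0.5, 0.45])
labels = np.array([0, 0, 0, 1, 1, 1])
th = per_class_thresholds(scores, labels)
# A test score of 0.6 falls below class 0's threshold but above
# class 1's: the same similarity means different uncertainty per class.
```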
no code implementations • 26 Apr 2022 • Kaushik Roy, Manas Gaur, Qi Zhang, Amit Sheth
Improving the performance and natural language explanations of deep learning algorithms is a priority for adoption by humans in the real world.
no code implementations • 25 Mar 2022 • Md Mazharul Islam, Shamiul Alam, Md Shafayat Hossain, Kaushik Roy, Ahmedullah Aziz
The revolution in artificial intelligence (AI) brings up an enormous storage and data processing requirement.
no code implementations • 21 Jan 2022 • Isha Garg, Manish Nagaraj, Kaushik Roy
This is done via a central server that aggregates learning in the form of weight updates.
no code implementations • 20 Dec 2021 • Amitangshu Mukherjee, Isha Garg, Kaushik Roy
We show that learning in this structured hierarchical manner results in networks that are more robust against subpopulation shifts, with an improvement up to 3\% in terms of accuracy and up to 11\% in terms of graphical distance over standard models on subpopulation shift benchmarks.
no code implementations • 17 Nov 2021 • Jason Allred, Kaushik Roy
Converted SNNs function sufficiently well because the mean pre-firing membrane potential of a spiking neuron is proportional to the dot product of the input rate vector and the neuron weight vector, similar to the functionality of a non-spiking network.
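The proportionality claim can be checked numerically: with Bernoulli rate-coded inputs, the mean per-step drive into a spiking neuron converges to the dot product of the rate vector and the weight vector. A minimal sketch (illustrative parameters, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
rates = rng.uniform(0.0, 0.5, size=10)   # input spike probabilities
w = rng.normal(size=10)                  # neuron weights
T = 200_000                              # simulation steps

# Input spikes are Bernoulli with the given rates, so the average
# weighted input per step approaches w . rates, mirroring the
# dot product computed by the corresponding non-spiking neuron.
spikes = rng.random((T, 10)) < rates
mean_drive = (spikes @ w).mean()
```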
1 code implementation • 17 Nov 2021 • Sai Aparna Aketi, Sangamesh Kodge, Kaushik Roy
In this paper, we propose and show the convergence of low precision decentralized training that aims to reduce the computational complexity and communication cost of decentralized training.
no code implementations • 18 Oct 2021 • Sangamesh Kodge, Kaushik Roy
Experiments on the probing task from the SentEval dataset show that our model performs up to $4.65\%$ better in accuracy than the baseline with an average improvement of $2.67\%$ on the semantic tasks.
1 code implementation • 1 Oct 2021 • Sayeed Shafayet Chowdhury, Nitin Rathi, Kaushik Roy
We achieve top-1 accuracy of 93.05%, 70.15% and 67.71% on CIFAR-10, CIFAR-100 and ImageNet, respectively, using VGG16, with just 1 timestep.
no code implementations • 29 Sep 2021 • Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy
This reduces the separability of in-distribution data from OoD data.
Out-of-Distribution Detection
Out of Distribution (OOD) Detection
1 code implementation • 19 Sep 2021 • Chun Tao, Deboleena Roy, Indranil Chakraborty, Kaushik Roy
First, we study the noise stability of such networks on unperturbed inputs and observe that internal activations of adversarially trained networks have lower Signal-to-Noise Ratio (SNR), and are sensitive to noise compared to vanilla networks.
no code implementations • 16 Sep 2021 • Adarsh Kumar Kosta, Malik Aqeel Anwar, Priyadarshini Panda, Arijit Raychowdhury, Kaushik Roy
To address this challenge, we propose a reconfigurable architecture with preemptive exits for efficient deep RL (RAPID-RL).
no code implementations • 14 Sep 2021 • Bing Han, Cheng Wang, Kaushik Roy
To address these challenges, we propose a novel neuron model that has cosine activation with a time varying component for sequential processing.
no code implementations • 14 Sep 2021 • Yinghan Long, Indranil Chakraborty, Gopalakrishnan Srinivasan, Kaushik Roy
Only data with high probabilities of belonging to hard classes would be sent to the extension block for prediction.
1 code implementation • 10 Sep 2021 • Gobinda Saha, Kaushik Roy
One way to enable such learning is to store past experiences in the form of input examples in episodic memory and replay them when learning new tasks.
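Storing past examples and replaying them is commonly implemented with a reservoir-sampled buffer; the sketch below is a generic illustration of that pattern (class name and capacity are assumptions, not the paper's implementation):

```python
import random

class EpisodicMemory:
    """Minimal replay buffer: stores past examples and mixes a batch
    of them into training on a new task."""
    def __init__(self, capacity):
        self.capacity, self.buffer, self.seen = capacity, [], 0

    def add(self, example):
        # Reservoir sampling keeps a uniform sample of the whole stream.
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen + 1)
            if j < self.capacity:
                self.buffer[j] = example
        self.seen += 1

    def replay(self, k):
        return random.sample(self.buffer, min(k, len(self.buffer)))

mem = EpisodicMemory(capacity=50)
for x in range(200):      # stream of task-1 examples
    mem.add(x)
batch = mem.replay(8)     # replayed alongside new-task data
```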
1 code implementation • 4 Sep 2021 • Wachirawit Ponghiran, Kaushik Roy
We show that SNNs can be trained for sequential tasks and propose modifications to a network of LIF neurons that enable internal states to learn long sequences and make their inherent recurrence resilient to the vanishing gradient problem.
no code implementations • 2 Aug 2021 • Amit Sheth, Manas Gaur, Kaushik Roy, Keyur Faldu
To understand and validate an AI system's outcomes (such as classification, recommendations, predictions), that lead to developing trust in the AI system, it is necessary to involve explicit domain knowledge that humans understand and use.
Decision Making
Explainable Artificial Intelligence (XAI)
no code implementations • 25 Jun 2021 • Kaushik Roy, Qi Zhang, Manas Gaur, Amit Sheth
Contextual Bandits find important use cases in various real-life scenarios such as online advertising, recommendation systems, healthcare, etc.
no code implementations • 23 Jun 2021 • Shubham Negi, Indranil Chakraborty, Aayush Ankit, Kaushik Roy
The hardware efficiency (energy, latency and area) as well as application accuracy (considering device and circuit non-idealities) of DNNs mapped to such hardware are co-dependent on network parameters, such as kernel size, depth etc.
no code implementations • 13 May 2021 • Nathan Dolbir, Triyasha Dastidar, Kaushik Roy
AI chatbots have made vast strides in technology improvement in recent years and are already operational in many industries.
no code implementations • 12 May 2021 • Manas Gaur, Kaushik Roy, Aditya Sharma, Biplav Srivastava, Amit Sheth
During the ongoing COVID-19 crisis, subreddits on Reddit, such as r/Coronavirus, saw a rapid growth in users' requests for help (support seekers - SSs), including individuals with varying professions and experiences with diverse perspectives on care (support providers - SPs).
no code implementations • 26 Apr 2021 • Sayeed Shafayet Chowdhury, Isha Garg, Kaushik Roy
Moreover, they require 8-14X less compute energy compared to their unpruned standard deep learning counterparts.
no code implementations • 29 Mar 2021 • Usha Lokala, Francois Lamy, Triyasha Ghosh Dastidar, Kaushik Roy, Raminta Daniulaityte, Srinivasan Parthasarathy, Amit Sheth
However, the lack of evidence on the relationship has resulted in opioids being largely inaccessible through legal means.
no code implementations • 19 Mar 2021 • Chankyu Lee, Adarsh Kumar Kosta, Kaushik Roy
Standard frame-based cameras that sample light intensity frames are heavily impacted by motion blur for high-speed motion and fail to perceive scene accurately when the dynamic range is high.
1 code implementation • ICLR 2021 • Gobinda Saha, Isha Garg, Kaushik Roy
The ability to learn continually without forgetting the past tasks is a desired attribute for artificial learning systems.
no code implementations • 11 Feb 2021 • Kaushik Roy, Qi Zhang, Manas Gaur, Amit Sheth
To this end, we introduce a mathematical framework for KIPG methods that can (a) induce relevant feature counts over multi-relational features of the world, (b) handle latent non-homogeneous counts as hidden variables that are linear combinations of kernelized aggregates over the features, and (c) infuse knowledge as functional constraints in a principled manner.
no code implementations • 1 Feb 2021 • Kaushik Roy, Usha Lokala, Vedant Khandelwal, Amit Sheth
With strong marketing advocacy of the benefits of cannabis use for improved mental health, cannabis legalization is a priority among legislators.
no code implementations • ICCV 2021 • Isha Garg, Sayeed Shafayet Chowdhury, Kaushik Roy
Notably, DCT-SNN performs inference with 2-14X reduced latency compared to other state-of-the-art SNNs, while achieving comparable accuracy to their standard deep learning counterparts.
no code implementations • 1 Jan 2021 • Nitin Rathi, Kaushik Roy
The trained membrane leak controls the flow of input information and attenuates irrelevant inputs to increase the activation sparsity in the convolutional and linear layers of the network.
no code implementations • 15 Dec 2020 • Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy
We utilize mixup in two ways to implement Vicinal Risk Minimization.
Out-of-Distribution Detection
Out of Distribution (OOD) Detection
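The entry above applies mixup to implement Vicinal Risk Minimization: training on convex combinations of example pairs rather than raw samples. A minimal sketch of standard mixup, assuming one-hot labels (the two-way usage in the paper is not reproduced here):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng(0)):
    """Standard mixup: blend a pair of inputs and their labels with a
    Beta-distributed mixing coefficient."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

x1, y1 = np.ones(4), np.array([1.0, 0.0])   # one-hot labels
x2, y2 = np.zeros(4), np.array([0.0, 1.0])
xm, ym = mixup(x1, y1, x2, y2)
```

The blended label `ym` stays a valid probability distribution, which is what lets the model train on the vicinal (interpolated) points.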
1 code implementation • 5 Oct 2020 • Isha Garg, Sayeed Shafayet Chowdhury, Kaushik Roy
Notably, DCT-SNN performs inference with 2-14X reduced latency compared to other state-of-the-art SNNs, while achieving comparable accuracy to their standard deep learning counterparts.
no code implementations • 27 Aug 2020 • Deboleena Roy, Indranil Chakraborty, Timur Ibrayev, Kaushik Roy
The increasing computational demand of Deep Learning has propelled research in special-purpose inference accelerators based on emerging non-volatile memory (NVM) technologies.
no code implementations • 9 Aug 2020 • Nitin Rathi, Kaushik Roy
The trained membrane leak controls the flow of input information and attenuates irrelevant inputs to increase the activation sparsity in the convolutional and dense layers of the network.
1 code implementation • 4 Aug 2020 • Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy
In this work, we study the effect of network architecture, initialization, optimizer, input, weight and activation quantization on transferability of adversarial samples.
no code implementations • 15 Jun 2020 • Sayeed Shafayet Chowdhury, Chankyu Lee, Kaushik Roy
While the leaky models have been argued as more bioplausible, a comparative analysis between models with and without leak from a purely computational point of view demands attention.
no code implementations • 10 Jun 2020 • Srijita Das, Sriraam Natarajan, Kaushik Roy, Ronald Parr, Kristian Kersting
We consider the problem of Approximate Dynamic Programming in relational domains.
no code implementations • 21 May 2020 • Yinghan Long, Indranil Chakraborty, Kaushik Roy
The proposed network can be deployed in a distributed manner, consisting of quantized layers and early exits at the edge and full-precision layers on the cloud.
1 code implementation • ICLR 2020 • Nitin Rathi, Gopalakrishnan Srinivasan, Priyadarshini Panda, Kaushik Roy
We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within few epochs and requires fewer time steps for input processing.
no code implementations • 21 Apr 2020 • Maryam Parsa, Catherine D. Schuman, Prasanna Date, Derek C. Rose, Bill Kay, J. Parker Mitchell, Steven R. Young, Ryan Dellana, William Severa, Thomas E. Potok, Kaushik Roy
In this work, we introduce a Bayesian approach for optimizing the hyperparameters of an algorithm for training binary communication networks that can be deployed to neuromorphic hardware.
no code implementations • 27 Mar 2020 • Mustafa Ali, Akhilesh Jaiswal, Sangamesh Kodge, Amogh Agrawal, Indranil Chakraborty, Kaushik Roy
`In-memory computing' is being widely explored as a novel computing paradigm to mitigate the well known memory bottleneck.
1 code implementation • ECCV 2020 • Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy
Our results suggest that SNNs trained with LIF neurons and smaller number of timesteps are more robust than the ones with IF (Integrate-Fire) neurons and larger number of timesteps.
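The LIF/IF distinction can be made concrete with a toy simulation (a sketch with assumed parameters, not the paper's setup): the leak makes sub-threshold input decay away instead of accumulating indefinitely, one intuition for why leaky neurons filter small perturbations.

```python
def lif_step(v, inp, leak=0.9, v_th=1.0):
    """One step of a leaky integrate-and-fire neuron; leak=1.0 turns
    it into a plain integrate-and-fire (IF) neuron."""
    v = leak * v + inp
    spike = v >= v_th
    if spike:
        v = 0.0           # hard reset after firing
    return v, spike

# The same weak constant input drives the IF neuron to threshold
# repeatedly, but decays away in the LIF neuron before it can fire.
v_if = v_lif = 0.0
n_if = n_lif = 0
for _ in range(20):
    v_if, s = lif_step(v_if, 0.25, leak=1.0)
    n_if += s
    v_lif, s = lif_step(v_lif, 0.25, leak=0.5)
    n_lif += s
```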
no code implementations • CVPR 2020 • Chi Nhan Duong, Thanh-Dat Truong, Kha Gia Quach, Hung Bui, Kaushik Roy, Khoa Luu
Unveiling face images of a subject given his/her high-level representations extracted from a blackbox Face Recognition engine is extremely challenging.
no code implementations • 15 Mar 2020 • Indranil Chakraborty, Mustafa Fayez Ali, Dong Eun Kim, Aayush Ankit, Kaushik Roy
Further, using the functional simulator and GENIEx, we demonstrate that an analytical model can overestimate the degradation in classification accuracy by $\ge 10\%$ on CIFAR-100 and $3.7\%$ on ImageNet datasets compared to GENIEx.
Emerging Technologies
1 code implementation • ECCV 2020 • Chankyu Lee, Adarsh Kumar Kosta, Alex Zihao Zhu, Kenneth Chaney, Kostas Daniilidis, Kaushik Roy
Spiking Neural Networks (SNNs) serve as ideal paradigms to handle event camera outputs, but deep SNNs suffer in terms of performance due to the spike vanishing phenomenon.
no code implementations • 2 Mar 2020 • Jason M. Allred, Steven J. Spencer, Gopalakrishnan Srinivasan, Kaushik Roy
Spiking Neural Networks (SNNs) are being explored for their potential energy efficiency resulting from sparse, event-driven computations.
1 code implementation • CVPR 2020 • Bing Han, Gopalakrishnan Srinivasan, Kaushik Roy
We find that performance degradation in the converted SNN stems from using "hard reset" spiking neurons that are driven to a fixed reset potential once their membrane potential exceeds the firing threshold, leading to information loss during SNN inference.
no code implementations • 25 Feb 2020 • Sai Aparna Aketi, Priyadarshini Panda, Kaushik Roy
To address this issue, we propose an ensemble of classifiers at hidden layers to enable energy efficient detection of natural errors.
1 code implementation • 23 Feb 2020 • Sai Aparna Aketi, Sourjya Roy, Anand Raghunathan, Kaushik Roy
To address all the above issues, we present a simple-yet-effective gradual channel pruning while training methodology using a novel data-driven metric referred to as feature relevance score.
1 code implementation • 23 Jan 2020 • Gobinda Saha, Isha Garg, Aayush Ankit, Kaushik Roy
A minimal number of extra dimensions required to explain the current task are added to the Core space and the remaining Residual is freed up for learning the next task.
no code implementations • 30 Oct 2019 • Priyadarshini Panda, Aparna Aketi, Kaushik Roy
Spiking Neural Networks (SNNs) may offer an energy-efficient alternative for implementing deep learning applications.
no code implementations • 29 Jun 2019 • Amogh Agrawal, Chankyu Lee, Kaushik Roy
We rank the DNN weights and kernels based on a sensitivity analysis, and re-arrange the columns such that the most sensitive kernels are mapped closer to the drivers, thereby minimizing the impact of errors on the overall accuracy.
Emerging Technologies
no code implementations • 11 Jun 2019 • Maryam Parsa, Aayush Ankit, Amirkoushyar Ziabari, Kaushik Roy
The ever increasing computational cost of Deep Neural Networks (DNN) and the demand for energy efficient hardware for DNN acceleration has made accuracy and hardware cost co-optimization for DNNs tremendously important, especially for edge devices.
1 code implementation • 4 Jun 2019 • Indranil Chakraborty, Deboleena Roy, Isha Garg, Aayush Ankit, Kaushik Roy
The `Internet of Things' has brought increased demand for AI-based edge computing in applications ranging from healthcare monitoring systems to autonomous vehicles.
1 code implementation • 4 Jun 2019 • Wachirawit Ponghiran, Gopalakrishnan Srinivasan, Kaushik Roy
We propose reinforcement learning on simple networks consisting of random connections of spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very few trainable parameters.
no code implementations • 24 May 2019 • Deboleena Roy, Priyadarshini Panda, Kaushik Roy
The spiking autoencoders are benchmarked on MNIST and Fashion-MNIST and achieve very low reconstruction loss, comparable to ANNs.
no code implementations • 8 May 2019 • Priyadarshini Panda, Efstathia Soufleri, Kaushik Roy
We analyze the stability of recurrent networks, specifically, reservoir computing models during training by evaluating the eigenvalue spectra of the reservoir dynamics.
no code implementations • 7 May 2019 • Saima Sharmin, Priyadarshini Panda, Syed Shakib Sarwar, Chankyu Lee, Wachirawit Ponghiran, Kaushik Roy
In this work, we present, for the first time, a comprehensive analysis of the behavior of more bio-plausible networks, namely Spiking Neural Network (SNN) under state-of-the-art adversarial tests.
no code implementations • 15 Mar 2019 • Chankyu Lee, Syed Shakib Sarwar, Priyadarshini Panda, Gopalakrishnan Srinivasan, Kaushik Roy
Spiking Neural Networks (SNNs) have recently emerged as a prominent neural computing paradigm.
no code implementations • 11 Feb 2019 • Gopalakrishnan Srinivasan, Kaushik Roy
In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs.
no code implementations • 8 Feb 2019 • Priyadarshini Panda, Indranil Chakraborty, Kaushik Roy
Specifically, discretizing the input space (reducing allowed pixel levels from 256 values, or 8-bit, to 4 values, or 2-bit) extensively improves the adversarial robustness of DLNs for a substantial range of perturbations with minimal loss in test accuracy.
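Reducing 8-bit pixels to 2-bit amounts to snapping each pixel to one of 4 uniform levels, which wipes out perturbations smaller than the quantization step. A minimal sketch:

```python
import numpy as np

def discretize(img_uint8, bits=2):
    """Quantize 8-bit pixels down to 2**bits levels; 2-bit keeps only
    4 values, discarding small (e.g. adversarial) perturbations."""
    levels = 2 ** bits
    step = 256 // levels              # 64 for 2-bit
    return (img_uint8 // step) * step

img = np.array([0, 63, 64, 127, 128, 255], dtype=np.uint8)
q = discretize(img)
```

Pixels 0 and 63 collapse to the same level, so a perturbation of up to 63 intensity values within one bin has no effect on the discretized input.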
no code implementations • 8 Feb 2019 • Jason M. Allred, Kaushik Roy
Stochastic gradient descent requires that training samples be drawn from a uniformly random distribution of the data.
no code implementations • 1 Feb 2019 • Indranil Chakraborty, Deboleena Roy, Aayush Ankit, Kaushik Roy
In this work, we propose extremely quantized hybrid network architectures with both binary and full-precision sections to emulate the classification performance of full-precision networks while ensuring significant energy efficiency and memory compression.
no code implementations • 29 Jan 2019 • Aayush Ankit, Izzat El Hajj, Sai Rahul Chalamalasetti, Geoffrey Ndu, Martin Foltin, R. Stanley Williams, Paolo Faraboschi, Wen-mei Hwu, John Paul Strachan, Kaushik Roy, Dejan S Milojicic
We also present the PUMA compiler which translates high-level code to PUMA ISA.
Emerging Technologies Hardware Architecture
no code implementations • 15 Dec 2018 • Isha Garg, Priyadarshini Panda, Kaushik Roy
We demonstrate the proposed methodology on AlexNet and VGG style networks on the CIFAR-10, CIFAR-100 and ImageNet datasets, and successfully achieve an optimized architecture with a reduction of up to 3.8X and 9X in the number of operations and parameters respectively, while trading off less than 1% accuracy.
no code implementations • 28 Nov 2018 • Kha Gia Quach, Ngan Le, Chi Nhan Duong, Ibsa Jalata, Kaushik Roy, Khoa Luu
To demonstrate the robustness and effectiveness of each component in the proposed approach, three experiments were conducted: (i) evaluation on AffectNet database to benchmark the proposed EmoNet for recognizing facial expression; (ii) evaluation on EmotiW2018 to benchmark the proposed deep feature level fusion mechanism NVPF; and, (iii) examine the proposed TNVPF on an innovative Group-level Emotion on Crowd Videos (GECV) dataset composed of 627 videos collected from publicly available sources.
no code implementations • 31 Aug 2018 • Shubham Jain, Abhronil Sengupta, Kaushik Roy, Anand Raghunathan
We present RxNN, a fast and accurate simulation framework to evaluate large-scale DNNs on resistive crossbar systems.
1 code implementation • 5 Jul 2018 • Priyadarshini Panda, Kaushik Roy
We introduce a Noise-based prior Learning (NoL) approach for training neural networks that are intrinsically robust to adversarial attacks.
no code implementations • 1 Jul 2018 • Amogh Agrawal, Akhilesh Jaiswal, Deboleena Roy, Bing Han, Gopalakrishnan Srinivasan, Aayush Ankit, Kaushik Roy
In this paper, we demonstrate how deep binary networks can be accelerated in modified von-Neumann machines by enabling binary convolutions within the SRAM array.
Emerging Technologies
no code implementations • 13 Jun 2018 • Baibhab Chatterjee, Priyadarshini Panda, Shovan Maity, Ayan Biswas, Kaushik Roy, Shreyas Sen
In this work, we will analyze, compare and contrast existing neuron architectures with a proposed mixed-signal neuron (MS-N) in terms of performance, power and noise, thereby demonstrating the applicability of the proposed mixed-signal neuron for achieving extreme energy-efficiency in neuromorphic computing.
1 code implementation • 15 Feb 2018 • Deboleena Roy, Priyadarshini Panda, Kaushik Roy
Over the past decade, Deep Convolutional Neural Networks (DCNNs) have shown remarkable performance in most computer vision tasks.
no code implementations • 7 Feb 2018 • Abhronil Sengupta, Yuting Ye, Robert Wang, Chiao Liu, Kaushik Roy
Over the past few years, Spiking Neural Networks (SNNs) have become popular as a possible pathway to enable low-power event-driven neuromorphic hardware.
no code implementations • 5 Jan 2018 • Soumya Ukil, Swarnendu Ghosh, Sk Md Obaidullah, K. C. Santosh, Kaushik Roy, Nibaran Das
These are then used to train different CNNs to select features.
no code implementations • 26 Dec 2017 • Priyadarshini Panda, Kaushik Roy
Anatomical studies demonstrate that brain reformats input information to generate reliable responses for performing computations.
no code implementations • 7 Dec 2017 • Syed Shakib Sarwar, Aayush Ankit, Kaushik Roy
We propose an efficient training methodology and incrementally growing DCNN to learn new tasks while sharing part of the base network.
no code implementations • 24 Oct 2017 • Baibhab Chatterjee, Priyadarshini Panda, Shovan Maity, Kaushik Roy, Shreyas Sen
This work presents the design and analysis of a mixed-signal neuron (MS-N) for convolutional neural networks (CNN) and compares its performance with a digital neuron (Dig-N) in terms of operating frequency, power and noise.
no code implementations • 12 Oct 2017 • Nitin Rathi, Priyadarshini Panda, Kaushik Roy
We present a sparse SNN topology where non-critical connections are pruned to reduce the network size and the remaining critical synapses are weight quantized to accommodate for limited conductance levels.
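The two steps described above - pruning non-critical synapses, then quantizing the surviving weights to a few conductance levels - can be sketched in software as follows (the magnitude-based threshold and uniform level grid are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def prune_and_quantize(weights, prune_frac=0.5, levels=5):
    """Magnitude-prune a weight array, then quantize the survivors.

    Non-critical (small-magnitude) synapses are zeroed; the remaining
    critical weights are snapped to a small set of uniformly spaced
    levels, mimicking the limited conductance levels of the device.
    """
    w = np.asarray(weights, dtype=float)
    thresh = np.quantile(np.abs(w), prune_frac)
    mask = np.abs(w) >= thresh            # keep only critical synapses
    w = w * mask
    if mask.any():
        step = np.abs(w).max() / (levels - 1)  # uniform quantization grid
        w = np.round(w / step) * step
    return w, mask
```

The pruning fraction and number of levels would in practice be set by the target network size and the conductance resolution of the synaptic device.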
no code implementations • 26 Aug 2017 • Aayush Ankit, Abhronil Sengupta, Kaushik Roy
Implementation of neuromorphic systems using Memristive Crossbar Arrays (MCA) built on post-CMOS (Complementary Metal-Oxide-Semiconductor) technologies has emerged as a promising solution for enabling low-power acceleration of neural networks.
no code implementations • 19 May 2017 • Akhilesh Jaiswal, Amogh Agrawal, Priyadarshini Panda, Kaushik Roy
The basic building blocks of such neuromorphic systems are neurons and synapses.
no code implementations • 12 May 2017 • Syed Shakib Sarwar, Priyadarshini Panda, Kaushik Roy
This combination creates a balanced system that gives better training performance in terms of energy and time, compared to the standalone CNN (without any Gabor kernels), in exchange for tolerable accuracy degradation.
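The fixed Gabor kernels used in such a first layer can be generated analytically rather than trained - an oriented sinusoid under a Gaussian envelope. A minimal sketch with illustrative parameter values (not the paper's exact filter bank):

```python
import numpy as np

def gabor_kernel(size=7, theta=0.0, sigma=2.0, lam=4.0, gamma=0.5):
    """A single real Gabor kernel: an oriented cosine carrier
    modulated by an elliptical Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def gabor_bank(num_orientations=4, size=7):
    """Fixed (untrained) first-layer kernels at evenly spaced orientations."""
    thetas = np.linspace(0, np.pi, num_orientations, endpoint=False)
    return np.stack([gabor_kernel(size=size, theta=t) for t in thetas])
```

Because these kernels are closed-form, the first layer needs no backpropagation, which is where the training energy and time savings come from.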
no code implementations • 22 Mar 2017 • Priyadarshini Panda, Jason M. Allred, Shriram Ramanathan, Kaushik Roy
Against this backdrop, we present ASP (Adaptive Synaptic Plasticity), a novel unsupervised learning mechanism for improved recognition with Spiking Neural Networks (SNNs) in real-time, on-line learning within a dynamic environment.
no code implementations • 10 Mar 2017 • Priyadarshini Panda, Gopalakrishnan Srinivasan, Kaushik Roy
Brain-inspired learning models attempt to mimic the cortical architecture and computations performed in the neurons and synapses constituting the human brain to achieve its efficiency in cognitive tasks.
no code implementations • 20 Feb 2017 • Aayush Ankit, Abhronil Sengupta, Priyadarshini Panda, Kaushik Roy
In this paper, we propose RESPARC - a reconfigurable and energy-efficient architecture built on Memristive Crossbar Arrays (MCA) for deep Spiking Neural Networks (SNNs).
1 code implementation • 29 Sep 2016 • Akhilesh Jaiswal, Sourjya Roy, Gopalakrishnan Srinivasan, Kaushik Roy
The efficiency of the human brain in performing classification tasks has attracted considerable research interest in brain-inspired neuromorphic computing.
no code implementations • 12 Sep 2016 • Priyadarshini Panda, Aayush Ankit, Parami Wijesinghe, Kaushik Roy
We evaluate our approach on a 12-object classification task from the Caltech101 dataset and a 10-object task from the CIFAR-10 dataset by constructing FALCON models on the NeuE platform in 45nm technology.
no code implementations • 1 Aug 2016 • Priyadarshini Panda, Kaushik Roy
A set of binary classifiers is organized on top of the learnt hierarchy to minimize the overall test-time complexity.
no code implementations • 27 Feb 2016 • Syed Shakib Sarwar, Swagath Venkataramani, Anand Raghunathan, Kaushik Roy
Multipliers consume most of the processing energy in digital neurons, and hence in hardware implementations of artificial neural networks.
no code implementations • 27 Feb 2016 • Gopalakrishnan Srinivasan, Parami Wijesinghe, Syed Shakib Sarwar, Akhilesh Jaiswal, Kaushik Roy
Our analysis on a widely used digit recognition dataset indicates that the voltage can be scaled by 200mV from the nominal operating voltage (950mV) with practically no loss (less than 0.5%) in accuracy (22nm predictive technology).
no code implementations • 3 Feb 2016 • Priyadarshini Panda, Kaushik Roy
We present a spike-based unsupervised regenerative learning scheme to train Spiking Deep Networks (SpikeCNN) for object recognition problems using biologically realistic leaky integrate-and-fire neurons.
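The leaky integrate-and-fire dynamics mentioned here follow a simple recurrence: the membrane potential leaks toward rest, integrates the input current each step, and emits a spike (then resets) when it crosses threshold. A minimal discrete-time sketch with illustrative constants:

```python
def lif_neuron(input_current, v_thresh=1.0, leak=0.9, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire neuron.

    Returns a binary spike train: 1 at each step where the membrane
    potential crosses threshold, 0 otherwise. Constants are
    illustrative, not taken from the paper.
    """
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t          # leak, then integrate the input
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset             # hard reset after a spike
        else:
            spikes.append(0)
    return spikes
```

A constant sub-threshold input thus produces sparse, periodic spikes - the event-driven sparsity that makes SNN hardware energy-efficient.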
no code implementations • 29 Sep 2015 • Priyadarshini Panda, Abhronil Sengupta, Kaushik Roy
Deep learning neural networks have emerged as one of the most powerful classification tools for vision related applications.
no code implementations • 29 Sep 2015 • Priyadarshini Panda, Swagath Venkataramani, Abhronil Sengupta, Anand Raghunathan, Kaushik Roy
We propose a 2-stage hierarchical classification framework, with increasing levels of complexity, wherein the first stage is trained to recognize the broad representative semantic features relevant to the object of interest.
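A cascade of this kind can be sketched as a conditional hand-off: the cheap first stage answers when it is confident about the broad semantic features, and only harder inputs reach the costlier second stage (the model interface and confidence threshold here are illustrative assumptions):

```python
def two_stage_classify(x, coarse_model, fine_model, confidence=0.9):
    """Conditional two-stage cascade. Each model is any callable
    returning (label, probability); only inputs the cheap coarse
    stage is unsure about are forwarded to the expensive fine stage."""
    label, prob = coarse_model(x)
    if prob >= confidence:
        return label, "stage1"      # early exit: easy input
    label, prob = fine_model(x)
    return label, "stage2"          # fallback: hard input
```

Average inference cost then scales with the fraction of inputs the first stage can resolve, which is the source of the framework's efficiency.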