no code implementations • NAACL (CLPsych) 2022 • Adam Tsakalidis, Jenny Chim, Iman Munire Bilal, Ayah Zirikly, Dana Atzil-Slonim, Federico Nanni, Philip Resnik, Manas Gaur, Kaushik Roy, Becky Inkster, Jeff Leintz, Maria Liakata
We provide an overview of the CLPsych 2022 Shared Task, which focusses on the automatic identification of ‘Moments of Change’ in longitudinal posts by individuals on social media and its connection with information regarding mental health.
no code implementations • ECCV 2020 • Bing Han, Kaushik Roy
The real-valued ReLU activations in ANN are encoded using the spike-times of the TSC neurons in the converted TSC-SNN.
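The idea can be illustrated with a toy temporal-coding sketch: a non-negative ReLU activation is mapped to a single spike time, with larger activations firing earlier. This assumes a simple linear time mapping for illustration only and is not the exact TSC scheme of the paper.

```python
import numpy as np

def encode_activation_as_spike_time(a, a_max, T):
    """Map a non-negative ReLU activation to a spike time in [0, T).

    Larger activations fire earlier; zero activations never fire.
    This is a generic temporal-coding sketch, not the exact TSC scheme.
    """
    a = np.clip(a, 0.0, a_max)
    if a == 0.0:
        return None                      # no spike for a zero activation
    return (1.0 - a / a_max) * (T - 1)   # a_max -> t=0, small a -> late spike

# Example: activations in [0, 1] mapped onto a 16-step window
for a in [0.0, 0.25, 0.5, 1.0]:
    print(a, encode_activation_as_spike_time(a, a_max=1.0, T=16))
```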
no code implementations • 17 Mar 2025 • Shristi Das Biswas, Efstathia Soufleri, Arani Roy, Kaushik Roy
Training robust deep video representations has proven to be computationally challenging due to substantial decoding overheads, the enormous size of raw video streams, and their inherent high temporal redundancy.
no code implementations • 12 Mar 2025 • Manish Nagaraj, Deepak Ravikumar, Efstathia Soufleri, Kaushik Roy
Deep learning models achieve state-of-the-art performance across domains but face scalability challenges in real-time or resource-constrained scenarios.
1 code implementation • 10 Mar 2025 • Jimmy Gammell, Anand Raghunathan, Abolfazl Hashemi, Kaushik Roy
Supervised deep learning has emerged as a state-of-the-art tool for carrying out *side-channel attacks*, which exploit this leakage by learning to map power/radiation measurements throughout encryption to the sensitive data operated on during that encryption.
no code implementations • 1 Mar 2025 • Vedant Khandelwal, Kaushik Roy, Valerie Lookingbill, Ritvik Garimella, Harshul Surana, Heather Heckman, Amit Sheth
The introduction of Large Language Models (LLMs) has significantly impacted various fields, including education, for example, by enabling the creation of personalized learning materials.
no code implementations • 4 Feb 2025 • Amit Ranjan Trivedi, Sina Tayebati, Hemant Kumawat, Nastaran Darabi, Divake Kumar, Adarsh Kumar Kosta, Yeshwanth Venkatesha, Dinithi Jayasuriya, Nethmi Jayasinghe, Priyadarshini Panda, Saibal Mukhopadhyay, Kaushik Roy
Autonomous edge computing in robotics, smart cities, and autonomous vehicles relies on the seamless integration of sensing, processing, and actuation for real-time decision-making in dynamic environments.
no code implementations • 3 Feb 2025 • Marco Paul E. Apolinario, Kaushik Roy, Charlotte Frenkel
Despite relying on local mechanisms, we demonstrate performance comparable to the backpropagation through time (BPTT) algorithm, within $\sim1.4$ accuracy points on challenging computer vision scenarios relevant at the edge, such as the IBM DVS Gesture dataset, CIFAR10-DVS, and temporal versions of CIFAR10 and CIFAR100.
no code implementations • 31 Jan 2025 • Amogh Joshi, Sourav Sanyal, Kaushik Roy
Leveraging an LLM for natural language processing, Neuro-LIFT translates human speech into high-level planning commands which are then autonomously executed using event-based neuromorphic vision and physics-driven planning.
no code implementations • 21 Jan 2025 • Adarsh Kumar Kosta, Amogh Joshi, Arjun Roy, Rohan Kumar Manna, Manish Nagaraj, Kaushik Roy
Object detection and tracking are essential perception tasks for enabling fully autonomous navigation in robotic systems.
1 code implementation • 16 Jan 2025 • Shristi Das Biswas, Matthew Shreve, Xuelu Li, Prateek Singhal, Kaushik Roy
In this paper, we introduce a novel framework for progressive exemplar-driven editing with off-the-shelf diffusion models, dubbed PIXELS, to enable customization by providing granular control over edits, allowing adjustments at the pixel or region level.
no code implementations • 2 Jan 2025 • Kaushik Roy, Harshul Surana, Darssan Eswaramoorthi, Yuxin Zi, Vedant Palit, Ritvik Garimella, Amit Sheth
Large language models (LLMs) are increasingly attracting the attention of healthcare professionals for their potential to assist in diagnostic assessments, which could alleviate the strain on the healthcare system caused by a high patient load and a shortage of providers.
1 code implementation • 18 Dec 2024 • Utkarsh Saxena, Sayeh Sharify, Kaushik Roy, Xin Wang
By means of principal component analysis (PCA), it identifies a low-rank subspace (in practice, 1/8 of the hidden dimension) in which activation variances are highest, and keeps the coefficients within this subspace in high precision (e.g., 8-bit) while quantizing the rest to 4-bit.
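A minimal numpy sketch of this idea, assuming a simple symmetric uniform quantizer and a PCA basis estimated from calibration activations (the paper's exact quantizer and calibration procedure may differ):

```python
import numpy as np

def uniform_quantize(x, n_bits):
    """Symmetric per-tensor uniform quantization (a simple stand-in)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max() / qmax + 1e-12
    return np.round(x / scale).clip(-qmax, qmax) * scale

def mixed_precision_pca_quant(acts, keep_ratio=1/8, hi_bits=8, lo_bits=4):
    """Quantize activations with higher precision in the top-variance PCA subspace.

    acts: (num_tokens, hidden_dim) calibration activations.
    Returns a dequantized approximation of `acts`.
    """
    d = acts.shape[1]
    k = max(1, int(d * keep_ratio))
    centered = acts - acts.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # PCA directions
    basis_hi = vt[:k].T          # (d, k): directions of highest variance
    basis_lo = vt[k:].T          # (d, d-k): remaining directions
    coeff_hi = acts @ basis_hi   # coefficients kept at high precision
    coeff_lo = acts @ basis_lo
    q_hi = uniform_quantize(coeff_hi, hi_bits)
    q_lo = uniform_quantize(coeff_lo, lo_bits)
    return q_hi @ basis_hi.T + q_lo @ basis_lo.T

acts = np.random.randn(512, 256).astype(np.float32)
approx = mixed_precision_pca_quant(acts)
print("mean abs error:", np.abs(acts - approx).mean())
```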
no code implementations • 4 Dec 2024 • Yinghan Long, Kaushik Roy
To address this limitation, we present Panoptic Diffusion Model (PDM), the first model designed to generate both images and panoptic segmentation maps concurrently.
no code implementations • 22 Nov 2024 • Prajna G. Malettira, Shubham Negi, Wachirawit Ponghiran, Kaushik Roy
We demonstrate the effectiveness of our approach on four event-based datasets: DSEC-flow for optical flow estimation, DVS128 Gesture for hand gesture recognition and Spiking Heidelberg Digits (SHD) and Spiking Speech Commands (SSC) for speech recognition.
no code implementations • 21 Nov 2024 • Marco Paul E. Apolinario, Kaushik Roy
Continual learning, or the ability to progressively integrate new concepts, is fundamental to intelligent beings, enabling adaptability in dynamic environments.
Ranked #1 on Continual Learning on 5-Datasets
no code implementations • 5 Nov 2024 • Deepika Sharma, Shubham Negi, Trishit Dutta, Amogh Agrawal, Kaushik Roy
A zero-skipping mechanism for sparse inputs significantly reduces energy usage by leveraging the inherent sparsity of spikes without introducing high overheads for low sparsity.
1 code implementation • 29 Oct 2024 • Jimmy Gammell, Anand Raghunathan, Kaushik Roy
Supervised deep learning has emerged as an effective tool for carrying out power side-channel attacks on cryptographic implementations.
no code implementations • 30 Sep 2024 • Kaushik Roy, Akila Dissanayake, Brendan Tidd, Peyman Moghadam
Lifelong imitation learning for manipulation tasks poses significant challenges due to distribution shifts that occur in incremental learning steps.
no code implementations • 16 Sep 2024 • Sourav Sanyal, Kaushik Roy
By continuously identifying potentially risky observations, the system performs prediction in real time about unsafe conditions and proactively adjusts its control actions to maintain safe navigation throughout the trajectory.
no code implementations • 16 Sep 2024 • Amogh Joshi, Adarsh Kumar Kosta, Kaushik Roy
Modern RL algorithms such as Deep Q Learning and Soft Actor-Critic attempt to remedy this shortcoming but can not provide the explainability required in applications such as autonomous robotics.
1 code implementation • 27 Aug 2024 • Arjun Roy, Kaushik Roy
DCT-CryptoNets also demonstrates superior scalability to RGB-based networks by further reducing computational cost as image size increases.
no code implementations • 11 Aug 2024 • Arkapravo Ghosh, Hemkar Reddy Sadana, Mukut Debnath, Panthadip Maji, Shubham Negi, Sumeet Gupta, Mrigank Sharad, Kaushik Roy
Recently reported designs reveal that the ADCs required for reading out the MVM results consume more than 85% of the total compute power and also dominate the area, thereby undermining the benefits of the IMC scheme.
1 code implementation • 10 Aug 2024 • Utkarsh Saxena, Gobinda Saha, Sakshi Choudhary, Kaushik Roy
To address this, we propose Eigen Attention, which performs the attention operation in a low-rank space, thereby reducing the KV cache memory overhead.
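A rough sketch of attention performed in a low-rank space, assuming the bases are obtained offline via SVD of calibration keys/values so that only the low-rank coefficients need to be cached; names and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def low_rank_basis(calib, rank):
    """Orthonormal basis (d, rank) spanning the top principal directions of calib."""
    _, _, vt = np.linalg.svd(calib - calib.mean(0), full_matrices=False)
    return vt[:rank].T

def eigen_attention(q, k, v, k_basis, v_basis):
    """Attention where K and V are cached only as low-rank coefficients.

    q, k, v: (seq, d). k_basis: (d, r_k), v_basis: (d, r_v).
    Only k @ k_basis and v @ v_basis need to be stored in the KV cache.
    """
    k_low = k @ k_basis                      # (seq, r_k) cached
    v_low = v @ v_basis                      # (seq, r_v) cached
    q_low = q @ k_basis                      # project queries into the same space
    scores = q_low @ k_low.T / np.sqrt(q.shape[-1])
    attn = np.exp(scores - scores.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)
    return (attn @ v_low) @ v_basis.T        # lift values back to model dimension

d, seq = 64, 32
calib = np.random.randn(1024, d)
kb, vb = low_rank_basis(calib, 16), low_rank_basis(calib, 16)
q, k, v = (np.random.randn(seq, d) for _ in range(3))
print(eigen_attention(q, k, v, kb, vb).shape)  # (32, 64)
```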
no code implementations • 31 Jul 2024 • Narendra Singh Dhakad, Yuvnish Malhotra, Santosh Kumar Vishvakarma, Kaushik Roy
This paper introduces a Scalable Hierarchical Aware Convolutional Neural Network (SHA-CNN) model architecture for Edge AI applications.
no code implementations • 26 Jul 2024 • Amit Sheth, Vishal Pallagani, Kaushik Roy
Generative AI, especially via Large Language Models (LLMs), has transformed content creation across text, images, and music, showcasing capabilities in following instructions through prompting, largely facilitated by instruction tuning.
no code implementations • 11 Jul 2024 • Sanaullah, Kaushik Roy, Ulrich Rückert, Thorsten Jungeblut
In this article, we propose a novel standalone hybrid Spiking-Convolutional Neural Network (SC-NN) model and test it on image inpainting tasks.
no code implementations • 3 Jul 2024 • Deepak Ravikumar, Efstathia Soufleri, Kaushik Roy
In this paper, we explore the properties of loss curvature with respect to input data in deep neural networks.
no code implementations • 2 Jul 2024 • Adnan Mehonic, Daniele Ielmini, Kaushik Roy, Onur Mutlu, Shahar Kvatinsky, Teresa Serrano-Gotarredona, Bernabe Linares-Barranco, Sabina Spiga, Sergey Savelev, Alexander G Balanov, Nitin Chawla, Giuseppe Desoli, Gerardo Malavena, Christian Monzio Compagnoni, Zhongrui Wang, J Joshua Yang, Ghazi Sarwat Syed, Abu Sebastian, Thomas Mikolajick, Beatriz Noheda, Stefan Slesazeck, Bernard Dieny, Tuo-Hung Hou, Akhil Varri, Frank Bruckerhoff-Pluckelmann, Wolfram Pernice, Xixiang Zhang, Sebastian Pazos, Mario Lanza, Stefan Wiefels, Regina Dittmann, Wing H Ng, Mark Buckwell, Horatio RJ Cox, Daniel J Mannion, Anthony J Kenyon, Yingming Lu, Yuchao Yang, Damien Querlioz, Louis Hutin, Elisa Vianello, Sayeed Shafayet Chowdhury, Piergiulio Mannocci, Yimao Cai, Zhong Sun, Giacomo Pedretti, John Paul Strachan, Dmitri Strukov, Manuel Le Gallo, Stefano Ambrogio, Ilia Valov, Rainer Waser
The roadmap is organized into several thematic sections, outlining current computing challenges, discussing the neuromorphic computing approach, analyzing mature and currently utilized technologies, providing an overview of emerging technologies, addressing material challenges, exploring novel computing concepts, and finally examining the maturity level of emerging technologies while determining the next essential steps for their advancement.
1 code implementation • 2 Jul 2024 • Efstathia Soufleri, Deepak Ravikumar, Kaushik Roy
Additionally, WISE improves accuracy by up to 4.28% and 9.30% on UCF-101 and HMDB-51, respectively.
1 code implementation • 24 May 2024 • Marco Paul E. Apolinario, Arani Roy, Kaushik Roy
Training deep neural networks (DNNs) using traditional backpropagation (BP) presents challenges in terms of computational complexity and energy consumption, particularly for on-device learning where computational resources are limited.
no code implementations • 22 May 2024 • Sakshi Choudhary, Sai Aparna Aketi, Kaushik Roy
Decentralized training enables learning with distributed datasets generated at different locations without relying on a central server.
no code implementations • 4 May 2024 • Soumyadeep Chandra, Sayeed Shafayet Chowdhury, Courtney Yong, Chandru P. Sundaram, Kaushik Roy
Surgical action localization is a challenging computer vision problem.
1 code implementation • 9 Apr 2024 • Sai Aparna Aketi, Abolfazl Hashemi, Kaushik Roy
Decentralized learning is crucial in supporting on-device learning over large distributed datasets, eliminating the need for a central server.
no code implementations • 29 Mar 2024 • Amitangshu Mukherjee, Timur Ibrayev, Kaushik Roy
Current Deep Neural Networks are vulnerable to adversarial examples, which alter their predictions by adding carefully crafted noise.
no code implementations • 24 Mar 2024 • Timur Ibrayev, Amitangshu Mukherjee, Sai Aparna Aketi, Kaushik Roy
Specifically, the proposed framework models the following mechanisms: 1) ventral (what) stream focusing on the input regions perceived by the fovea part of an eye (foveation), 2) dorsal (where) stream providing visual guidance, and 3) iterative processing of the two streams to calibrate visual focus and process the sequence of focused image patches.
no code implementations • 23 Mar 2024 • Shrihari Sridharan, Surya Selvam, Kaushik Roy, Anand Raghunathan
On several state-of-the-art networks for a range of autonomous navigation tasks, Ev-Edge achieves 1.28x-2.05x improvements in latency and 1.23x-2.15x in energy over an all-GPU implementation on the NVIDIA Jetson Xavier AGX platform for single-task execution scenarios.
no code implementations • 19 Mar 2024 • Timur Ibrayev, Isha Garg, Indranil Chakraborty, Kaushik Roy
Sparsity is then achieved by regularizing the variance of $L_{0}$ norms of neighboring columns within the same crossbar.
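An illustrative PyTorch penalty along these lines, with a smooth surrogate standing in for the non-differentiable $L_{0}$ count and an assumed crossbar width; the paper's exact formulation may differ.

```python
import torch

def crossbar_balance_penalty(weight, xbar_cols=64, eps=1e-2):
    """Penalize unevenness of (approximate) column L0 norms within each crossbar tile.

    weight: (rows, cols) layer weight mapped column-wise onto crossbars of width
    `xbar_cols`. The hard L0 count is replaced with a smooth surrogate so the
    penalty is differentiable; this is an illustrative stand-in, not the paper's
    exact regularizer.
    """
    soft_l0 = torch.tanh(weight.abs() / eps).sum(dim=0)          # per-column soft count
    n_tiles = soft_l0.numel() // xbar_cols
    tiles = soft_l0[: n_tiles * xbar_cols].view(n_tiles, xbar_cols)
    return tiles.var(dim=1).mean()                               # variance within each tile

w = torch.randn(128, 256, requires_grad=True)
loss = crossbar_balance_penalty(w)
loss.backward()
print(float(loss), w.grad.shape)
```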
1 code implementation • 13 Mar 2024 • Sangamesh Kodge, Deepak Ravikumar, Gobinda Saha, Kaushik Roy
SAP applied to the CIFAR dataset with 25% synthetic corruption shows up to 6% generalization improvement.
1 code implementation • 5 Mar 2024 • Sai Aparna Aketi, Sakshi Choudhary, Kaushik Roy
State-of-the-art decentralized learning algorithms typically require the data distribution to be Independent and Identically Distributed (IID).
no code implementations • 28 Feb 2024 • Deepak Ravikumar, Efstathia Soufleri, Abolfazl Hashemi, Kaushik Roy
Second, we present a novel insight showing that input loss curvature is upper-bounded by the differential privacy parameter.
no code implementations • 15 Feb 2024 • Kang He, Yinghan Long, Kaushik Roy
Prompt-based learning is susceptible to intrinsic bias present in pre-trained language models (LMs), leading to sub-optimal performance in prompt-based zero/few-shot settings.
no code implementations • 31 Jan 2024 • Chun Tao, Timur Ibrayev, Kaushik Roy
To mitigate this gap, we introduce the concept of "image grammar", consisting of "image semantics" and "image syntax", to denote the semantics of parts or patches of an image and the order in which these parts are arranged to create a meaningful object.
no code implementations • 30 Jan 2024 • Sayeed Shafayet Chowdhury, Soumyadeep Chandra, Kaushik Roy
Through our experiments, we discover an intriguing property of DNNs where we observe that state-of-the-art convolutional neural networks, as well as vision transformers, fail to discriminate between syntactically correct and incorrect images when trained on only correct ones.
no code implementations • 4 Jan 2024 • Vishal Pallagani, Kaushik Roy, Bharath Muppasani, Francesco Fabiano, Andrea Loreggia, Keerthiram Murugesan, Biplav Srivastava, Francesca Rossi, Lior Horesh, Amit Sheth
Automated Planning and Scheduling is among the growing areas in Artificial Intelligence (AI) where mention of LLMs has gained popularity.
1 code implementation • 29 Dec 2023 • Kanak Raj, Kaushik Roy, Vamshi Bonagiri, Priyanshul Govil, Krishnaprasad Thirunarayanan, Manas Gaur
To enhance the relevance and comprehensiveness of personalized responses, we propose using a two-step approach that involves (1) selectively integrating user personas and (2) contextualizing the response with supplementing information from a background knowledge source.
no code implementations • 26 Dec 2023 • Tanvi Sharma, Mustafa Ali, Indranil Chakraborty, Kaushik Roy
We believe the proposed work provides insights into what type of CiM to use, and when and where to optimally integrate it in the cache hierarchy for efficient matrix multiplication.
1 code implementation • NeurIPS 2023 Workshop on Gaze Meets ML, Proceedings of Machine Learning Research 2023 • Timur Ibrayev, Manish Nagaraj, Amitangshu Mukherjee, Kaushik Roy
While foveation enables it to process different regions of the input with variable degrees of detail, saccades allow it to change the focus point of such foveated regions.
no code implementations • 15 Dec 2023 • Kaushik Roy, Vedant Khandelwal, Harshul Surana, Valerie Vera, Amit Sheth, Heather Heckman
Systematic reviews (SRs), the librarian-assisted literature surveys of scholarly articles, take time and require significant human resources.
no code implementations • 15 Dec 2023 • Amit Sheth, Kaushik Roy
The rapid progression of Artificial Intelligence (AI) systems, facilitated by the advent of Large Language Models (LLMs), has resulted in their widespread application to provide human assistance across diverse industries.
no code implementations • 15 Dec 2023 • Yuxin Zi, Hariram Veeramani, Kaushik Roy, Amit Sheth
Natural language understanding (NLU) using neural network pipelines often requires additional context that is not solely present in the input data.
1 code implementation • 1 Dec 2023 • Sangamesh Kodge, Gobinda Saha, Kaushik Roy
We demonstrate our algorithm's efficacy on ImageNet using a Vision Transformer with only $\sim 1.5\%$ drop in retain accuracy compared to the original model while maintaining under $1\%$ accuracy on the unlearned class samples.
no code implementations • 23 Nov 2023 • Melodee Montgomery, Prosenjit Chatterjee, John Jenkins, Kaushik Roy
When we combine the new features with the existing ones, SVM and kNN achieved classification accuracies of 94.7% and 94.6%, respectively.
no code implementations • 23 Nov 2023 • Sultan Almalki, Prosenjit Chatterjee, Kaushik Roy
In authentication mode, all three classifiers achieved the highest accuracy (ACC) and Area Under Curve (AUC) from scenario B using the point and click action data: (Decision Tree ACC: 87.6%, AUC: 90.3%), (K-Nearest Neighbors ACC: 99.3%, AUC: 99.9%), and (Random Forest ACC: 89.9%, AUC: 92.5%).
1 code implementation • 23 Nov 2023 • Sumit Dalal, Deepa Tilwani, Kaushik Roy, Manas Gaur, Sarika Jain, Valerie Shalin, Amit Sheth
We develop such a system in the context of mental health (MH), using clinical practice guidelines (CPG) for diagnosing depression, a mental health disorder of global concern.
no code implementations • 23 Nov 2023 • Justin Spencer, Deborah Lawrence, Prosenjit Chatterjee, Kaushik Roy, Albert Esterline, Jung-Hee Kim
The second uses a shallow CNN based on a modified Spoofnet architecture, which is trained normally.
no code implementations • 23 Nov 2023 • Prosenjit Chatterjee, Alex Yalchin, Joseph Shelton, Kaushik Roy, Xiaohong Yuan, Kossi D. Edoh
The bio-metric images, especially the iris and face, are vulnerable to different presentation attacks.
no code implementations • 11 Nov 2023 • Aidin Shiri, Kaushik Roy, Amit Sheth, Manas Gaur
Fine-tuning pre-trained foundational language models (FLM) for specific tasks is often impractical, especially for resource-constrained devices.
no code implementations • 5 Nov 2023 • Kavitha Kunku, ANK Zaman, Kaushik Roy
Vicious assaults, malware, and various ransomware pose a cybersecurity threat, causing considerable damage to computer structures, servers, and mobile and web apps across various industries and businesses.
1 code implementation • 24 Oct 2023 • Sai Aparna Aketi, Kaushik Roy
The current state-of-the-art decentralized learning algorithms mostly assume the data distribution to be Independent and Identically Distributed (IID).
1 code implementation • 11 Sep 2023 • Abhisek Tiwari, Muhammed Sinan, Kaushik Roy, Amit Sheth, Sriparna Saha, Pushpak Bhattacharyya
We found that dialogue generation models trained with the SemTextualLogue loss attained superior performance compared to the traditional cross-entropy loss function.
1 code implementation • 31 Jul 2023 • Kaushik Roy, Christian Simon, Peyman Moghadam, Mehrtash Harandi
To mitigate forgetting of prior knowledge, we propose a novel knowledge distillation technique that takes into account the manifold structure of the latent/output space of a neural network when learning novel tasks.
1 code implementation • 31 Jul 2023 • Kaushik Roy, Peyman Moghadam, Mehrtash Harandi
To address the problem, we propose a distillation strategy named L3DMC that operates on mixed-curvature spaces to preserve the already-learned knowledge by modeling and maintaining complex geometrical structures.
no code implementations • 11 Jul 2023 • Isha Garg, Deepak Ravikumar, Kaushik Roy
Second, we inject corrupted samples which are memorized by the network, and show that these are learned with high curvature.
1 code implementation • 27 Jun 2023 • Marco Paul E. Apolinario, Kaushik Roy
Spiking Neural Networks (SNNs) are biologically plausible models that have been identified as potentially apt for deploying energy-efficient intelligence at the edge, particularly for sequential learning tasks.
Ranked #1 on Event-based Optical Flow on MVSEC
no code implementations • 24 Jun 2023 • Yuxin Zi, Kaushik Roy, Vignesh Narayanan, Manas Gaur, Amit Sheth
Crowdsourced and expert-curated knowledge graphs such as ConceptNet are designed to capture the meaning of words from a compact set of well-defined contexts.
no code implementations • 23 Jun 2023 • Kaushik Roy, Yuxin Zi, Vignesh Narayanan, Manas Gaur, Amit Sheth
However, the ad-hoc nature of existing methods makes it difficult to properly analyze the effects of knowledge infusion on the many moving parts or components of a transformer.
no code implementations • 16 Jun 2023 • Kaushik Roy, Yuxin Zi, Manas Gaur, Jinendra Malekar, Qi Zhang, Vignesh Narayanan, Amit Sheth
In this study, we introduce Process Knowledge-infused Learning (PK-iL), a new learning paradigm that layers clinical process knowledge structures on language model outputs, enabling clinician-friendly explanations of the underlying language model predictions.
no code implementations • 5 Jun 2023 • Shubham Negi, Deepika Sharma, Adarsh Kumar Kosta, Kaushik Roy
This is due to their sparse and asynchronous event outputs.
1 code implementation • 1 Jun 2023 • Revathy Venkataramanan, Kaushik Roy, Kanak Raj, Renjith Prasad, Yuxin Zi, Vignesh Narayanan, Amit Sheth
In this study, we explore the use of generative AI methods to extend current food computation models, primarily involving the analysis of nutrition and ingredients, to also incorporate cooking actions (e.g., add salt, fry the meat, boil the vegetables, etc.).
1 code implementation • 24 May 2023 • Yinghan Long, Sayeed Shafayet Chowdhury, Kaushik Roy
The segmented attention and lightweight RAF neurons ensure the efficiency of the proposed transformer.
Ranked #1 on Text Summarization on XSum
no code implementations • 22 May 2023 • Amogh Joshi, Adarsh Kosta, Wachirawit Ponghiran, Manish Nagaraj, Kaushik Roy
The ability of resource-constrained biological systems such as fruit flies to perform complex and high-speed maneuvers in cluttered environments has been one of the prime sources of inspiration for developing vision-based autonomous systems.
no code implementations • 13 May 2023 • Kaushik Roy, Manas Gaur, Misagh Soltani, Vipula Rawte, Ashwin Kalyan, Amit Sheth
LMs augmented with the ProKnow-guided method generated 89% safer questions in the depression and anxiety domain.
1 code implementation • NeurIPS 2023 • Sai Aparna Aketi, Abolfazl Hashemi, Kaushik Roy
Decentralized learning enables the training of deep learning models over large distributed datasets generated at different locations, without the need for a central server.
no code implementations • 8 May 2023 • Kaushik Roy, Tarun Garg, Vedant Palit, Yuxin Zi, Vignesh Narayanan, Amit Sheth
However, they do not ascribe object and concept-level meaning and semantics to the learned stochastic patterns such as those described in knowledge graphs.
no code implementations • 1 May 2023 • Amit Sheth, Kaushik Roy, Manas Gaur
Humans interact with the environment using a combination of perception - transforming sensory inputs from their environment into symbols, and cognition - mapping symbols to knowledge about the environment for supporting abstraction, reasoning by analogy, and long-term planning.
no code implementations • 14 Apr 2023 • Xuan-Bac Nguyen, Chi Nhan Duong, Marios Savvides, Kaushik Roy, Hugh Churchill, Khoa Luu
Promoting fairness for deep clustering models in unsupervised clustering settings to reduce demographic bias is a challenging goal.
no code implementations • 9 Apr 2023 • Deepak Ravikumar, Gobinda Saha, Sai Aparna Aketi, Kaushik Roy
The goal of IDKD is to homogenize the data distribution across the nodes.
no code implementations • 31 Mar 2023 • Kaushik Roy, Vedant Khandelwal, Raxit Goswami, Nathan Dolbir, Jinendra Malekar, Amit Sheth
After the pandemic, artificial intelligence (AI) powered support for mental health care has become increasingly important.
1 code implementation • 27 Mar 2023 • Sakshi Choudhary, Sai Aparna Aketi, Gobinda Saha, Kaushik Roy
Decentralized learning allows serverless training with spatially distributed data.
no code implementations • 13 Mar 2023 • Shrihari Sridharan, Jacob R. Stevens, Kaushik Roy, Anand Raghunathan
Transformers have achieved great success in a wide variety of natural language processing (NLP) tasks due to the attention mechanism, which assigns an importance score for every word relative to other words in a sequence.
1 code implementation • 2 Feb 2023 • Gobinda Saha, Kaushik Roy
In neural networks, continual learning results in gradient interference among sequential tasks, leading to catastrophic forgetting of old tasks while learning new ones.
no code implementations • CVPR 2023 • Isha Garg, Kaushik Roy
SLo-curves identifies the samples with low curvatures as being more data-efficient and trains on them with an additional regularizer that penalizes high curvature of the loss surface in their vicinity.
no code implementations • 19 Nov 2022 • Shristi Das Biswas, Adarsh Kosta, Chamika Liyanagedera, Marco Apolinario, Kaushik Roy
Event cameras detect changes in per-pixel intensity to generate asynchronous `event streams'.
Ranked #6 on Semantic Segmentation on DSEC
no code implementations • 3 Nov 2022 • Marco Paul E. Apolinario, Adarsh Kumar Kosta, Utkarsh Saxena, Kaushik Roy
Spiking Neural Networks (SNNs) are bio-plausible models that hold great potential for realizing energy-efficient implementations of sequential tasks on resource-constrained edge devices.
Ranked #12 on Gesture Recognition on DVS128 Gesture
no code implementations • 16 Oct 2022 • Rawshan Ara Mowri, Madhuri Siddula, Kaushik Roy
This research work utilizes the frequencies of different API calls to detect and classify ransomware families.
no code implementations • 9 Oct 2022 • Kaushik Roy, Yuxin Zi, Vignesh Narayanan, Manas Gaur, Amit Sheth
Domain-specific language understanding requires integrating multiple pieces of relevant contextual information.
no code implementations • 6 Oct 2022 • Efstathia Soufleri, Gobinda Saha, Kaushik Roy
We evaluate our method on image classification dataset (CIFAR10) and show that our synthetic data can be used for training networks from scratch, producing reasonable classification performance.
2 code implementations • 3 Oct 2022 • Manish Nagaraj, Chamika Mihiranga Liyanagedera, Kaushik Roy
Our technique consists of a lightweight spiking neural architecture that is able to separate events based on the speed of the corresponding objects.
1 code implementation • ICCV 2023 • Wachirawit Ponghiran, Chamika Mihiranga Liyanagedera, Kaushik Roy
In this work, we show that a temporally dense flow estimation at 100Hz can be achieved by treating the flow estimation as a sequential problem using two different variants of recurrent networks - Long-short term memory (LSTM) and spiking neural network (SNN).
1 code implementation • 28 Sep 2022 • Sai Aparna Aketi, Sangamesh Kodge, Kaushik Roy
Our experiments demonstrate that NGC and CompNGC outperform (by $0-6\%$) the existing SoTA decentralized learning algorithm over non-IID data with significantly lower compute and memory requirements.
no code implementations • 21 Sep 2022 • Adarsh Kumar Kosta, Kaushik Roy
Spiking Neural Networks (SNNs), with their neuro-inspired event-driven processing can efficiently handle such asynchronous data, while neuron models such as the leaky-integrate and fire (LIF) can keep track of the quintessential timing information contained in the inputs.
no code implementations • 19 Sep 2022 • Sourav Sanyal, Kaushik Roy
On the other hand, a regular data loss is used for adapting to residual disturbances (non-parametric uncertainties) unaccounted for during mathematical modelling.
no code implementations • 18 Jun 2022 • Tarun Garg, Kaushik Roy, Amit Sheth
Knowledge Graphs are a great resource to capture semantic knowledge in terms of entities and relationships between the entities.
no code implementations • 9 Jun 2022 • Amit Sheth, Manas Gaur, Kaushik Roy, Revathy Venkataraman, Vedant Khandelwal
For such applications, in addition to data and domain knowledge, the AI systems need to have access to and use the Process Knowledge, an ordered set of steps that the AI system needs to use or adhere to.
no code implementations • 3 Jun 2022 • Wilfried Haensch, Anand Raghunathan, Kaushik Roy, Bhaswar Chakrabart, Charudatta M. Phatak, Cheng Wang, Supratik Guha
In the second part, we review what is known about the different new non-volatile memory materials and devices suited for compute-in-memory, and discuss the outlook and challenges.
1 code implementation • NAACL (CLPsych) 2022 • Shrey Gupta, Anmol Agarwal, Manas Gaur, Kaushik Roy, Vignesh Narayanan, Ponnurangam Kumaraguru, Amit Sheth
We demonstrate the challenge of using existing datasets to train a DLM for generating FQs that adhere to clinical process knowledge.
no code implementations • 6 May 2022 • Deepak Ravikumar, Kaushik Roy
Therefore, applying a single threshold for all classes is not ideal since the same similarity score represents different uncertainties for different classes.
no code implementations • 26 Apr 2022 • Kaushik Roy, Manas Gaur, Qi Zhang, Amit Sheth
Improving the performance and natural language explanations of deep learning algorithms is a priority for adoption by humans in the real world.
no code implementations • 25 Mar 2022 • Md Mazharul Islam, Shamiul Alam, Md Shafayat Hossain, Kaushik Roy, Ahmedullah Aziz
The revolution in artificial intelligence (AI) brings up an enormous storage and data processing requirement.
no code implementations • 21 Jan 2022 • Isha Garg, Manish Nagaraj, Kaushik Roy
This is done via a central server that aggregates learning in the form of weight updates.
no code implementations • 20 Dec 2021 • Amitangshu Mukherjee, Isha Garg, Kaushik Roy
We show that learning in this structured hierarchical manner results in networks that are more robust against subpopulation shifts, with an improvement up to 3\% in terms of accuracy and up to 11\% in terms of graphical distance over standard models on subpopulation shift benchmarks.
no code implementations • 17 Nov 2021 • Jason Allred, Kaushik Roy
Converted SNNs function sufficiently well because the mean pre-firing membrane potential of a spiking neuron is proportional to the dot product of the input rate vector and the neuron weight vector, similar to the functionality of a non-spiking network.
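This proportionality is easy to check numerically with Bernoulli-per-step (Poisson-like) rate-coded inputs accumulated by a non-resetting integrator; the snippet below is only a sanity check of the stated relationship, not part of any conversion pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in = 2000, 50                       # timesteps and input neurons
rates = rng.uniform(0.0, 0.2, n_in)      # per-timestep firing probabilities
weights = rng.normal(0.0, 0.3, n_in)

# Accumulate weighted Bernoulli-per-step input spikes without reset, then compare
# the average per-step potential increment to the rate-weight dot product.
spikes = (rng.random((T, n_in)) < rates).astype(float)
membrane_increments = spikes @ weights
print("mean increment :", membrane_increments.mean())
print("rate . weights :", rates @ weights)
```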
1 code implementation • 17 Nov 2021 • Sai Aparna Aketi, Sangamesh Kodge, Kaushik Roy
In this paper, we propose and show the convergence of low precision decentralized training that aims to reduce the computational complexity and communication cost of decentralized training.
no code implementations • 18 Oct 2021 • Sangamesh Kodge, Kaushik Roy
Experiments on the probing task from the SentEval dataset show that our model performs up to $4.65\%$ better in accuracy than the baseline, with an average improvement of $2.67\%$ on the semantic tasks.
1 code implementation • 1 Oct 2021 • Sayeed Shafayet Chowdhury, Nitin Rathi, Kaushik Roy
We achieve top-1 accuracy of 93.05%, 70.15% and 67.71% on CIFAR-10, CIFAR-100 and ImageNet, respectively, using VGG16 with just 1 timestep.
no code implementations • 29 Sep 2021 • Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy
This reduces the separability of in-distribution data from OoD data.
Out-of-Distribution Detection
1 code implementation • 19 Sep 2021 • Chun Tao, Deboleena Roy, Indranil Chakraborty, Kaushik Roy
First, we study the noise stability of such networks on unperturbed inputs and observe that internal activations of adversarially trained networks have lower Signal-to-Noise Ratio (SNR), and are sensitive to noise compared to vanilla networks.
no code implementations • 16 Sep 2021 • Adarsh Kumar Kosta, Malik Aqeel Anwar, Priyadarshini Panda, Arijit Raychowdhury, Kaushik Roy
To address this challenge, we propose a reconfigurable architecture with preemptive exits for efficient deep RL (RAPID-RL).
no code implementations • 14 Sep 2021 • Bing Han, Cheng Wang, Kaushik Roy
To address these challenges, we propose a novel neuron model that has cosine activation with a time varying component for sequential processing.
no code implementations • 14 Sep 2021 • Yinghan Long, Indranil Chakraborty, Gopalakrishnan Srinivasan, Kaushik Roy
Only data with high probabilities of belonging to hard classes would be sent to the extension block for prediction.
1 code implementation • 10 Sep 2021 • Gobinda Saha, Kaushik Roy
One way to enable such learning is to store past experiences in the form of input examples in episodic memory and replay them when learning new tasks.
1 code implementation • 4 Sep 2021 • Wachirawit Ponghiran, Kaushik Roy
We show that SNNs can be trained for sequential tasks and propose modifications to a network of LIF neurons that enable internal states to learn long sequences and make their inherent recurrence resilient to the vanishing gradient problem.
no code implementations • 2 Aug 2021 • Amit Sheth, Manas Gaur, Kaushik Roy, Keyur Faldu
To understand and validate an AI system's outcomes (such as classification, recommendations, predictions), that lead to developing trust in the AI system, it is necessary to involve explicit domain knowledge that humans understand and use.
Decision Making
Explainable Artificial Intelligence (XAI)
no code implementations • 25 Jun 2021 • Kaushik Roy, Qi Zhang, Manas Gaur, Amit Sheth
Contextual Bandits find important use cases in various real-life scenarios such as online advertising, recommendation systems, healthcare, etc.
no code implementations • 23 Jun 2021 • Shubham Negi, Indranil Chakraborty, Aayush Ankit, Kaushik Roy
The hardware efficiency (energy, latency and area) as well as application accuracy (considering device and circuit non-idealities) of DNNs mapped to such hardware are co-dependent on network parameters, such as kernel size, depth etc.
no code implementations • 13 May 2021 • Nathan Dolbir, Triyasha Dastidar, Kaushik Roy
AI chatbots have made vast strides in technology improvement in recent years and are already operational in many industries.
no code implementations • 12 May 2021 • Manas Gaur, Kaushik Roy, Aditya Sharma, Biplav Srivastava, Amit Sheth
During the ongoing COVID-19 crisis, subreddits on Reddit, such as r/Coronavirus, saw a rapid growth in users' requests for help (support seekers - SSs) including individuals with varying professions and experiences with diverse perspectives on care (support providers - SPs).
no code implementations • 26 Apr 2021 • Sayeed Shafayet Chowdhury, Isha Garg, Kaushik Roy
Moreover, they require 8-14X less compute energy compared to their unpruned standard deep learning counterparts.
no code implementations • 29 Mar 2021 • Usha Lokala, Francois Lamy, Triyasha Ghosh Dastidar, Kaushik Roy, Raminta Daniulaityte, Srinivasan Parthasarathy, Amit Sheth
However, the lack of evidence on the relationship has resulted in opioids being largely inaccessible through legal means.
no code implementations • 19 Mar 2021 • Chankyu Lee, Adarsh Kumar Kosta, Kaushik Roy
Standard frame-based cameras that sample light intensity frames are heavily impacted by motion blur for high-speed motion and fail to perceive scene accurately when the dynamic range is high.
1 code implementation • ICLR 2021 • Gobinda Saha, Isha Garg, Kaushik Roy
The ability to learn continually without forgetting the past tasks is a desired attribute for artificial learning systems.
no code implementations • 11 Feb 2021 • Kaushik Roy, Qi Zhang, Manas Gaur, Amit Sheth
To this end, we introduce a mathematical framework for KIPG methods that can (a) induce relevant feature counts over multi-relational features of the world, (b) handle latent non-homogeneous counts as hidden variables that are linear combinations of kernelized aggregates over the features, and (c) infuse knowledge as functional constraints in a principled manner.
no code implementations • 1 Feb 2021 • Kaushik Roy, Usha Lokala, Vedant Khandelwal, Amit Sheth
With strong marketing advocacy of the benefits of cannabis use for improved mental health, cannabis legalization is a priority among legislators.
no code implementations • ICCV 2021 • Isha Garg, Sayeed Shafayet Chowdhury, Kaushik Roy
Notably, DCT-SNN performs inference with 2-14X reduced latency compared to other state-of-the-art SNNs, while achieving comparable accuracy to their standard deep learning counterparts.
no code implementations • 1 Jan 2021 • Nitin Rathi, Kaushik Roy
The trained membrane leak controls the flow of input information and attenuates irrelevant inputs to increase the activation sparsity in the convolutional and linear layers of the network.
no code implementations • 15 Dec 2020 • Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy
We utilize mixup in two ways to implement Vicinal Risk Minimization.
Out-of-Distribution Detection
1 code implementation • 5 Oct 2020 • Isha Garg, Sayeed Shafayet Chowdhury, Kaushik Roy
Notably, DCT-SNN performs inference with 2-14X reduced latency compared to other state-of-the-art SNNs, while achieving comparable accuracy to their standard deep learning counterparts.
no code implementations • 27 Aug 2020 • Deboleena Roy, Indranil Chakraborty, Timur Ibrayev, Kaushik Roy
The increasing computational demand of Deep Learning has propelled research in special-purpose inference accelerators based on emerging non-volatile memory (NVM) technologies.
no code implementations • 9 Aug 2020 • Nitin Rathi, Kaushik Roy
The trained membrane leak controls the flow of input information and attenuates irrelevant inputs to increase the activation sparsity in the convolutional and dense layers of the network.
1 code implementation • 4 Aug 2020 • Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy
In this work, we study the effect of network architecture, initialization, optimizer, input, weight and activation quantization on transferability of adversarial samples.
no code implementations • 15 Jun 2020 • Sayeed Shafayet Chowdhury, Chankyu Lee, Kaushik Roy
While the leaky models have been argued to be more bio-plausible, a comparative analysis between models with and without leak from a purely computational point of view demands attention.
no code implementations • 10 Jun 2020 • Srijita Das, Sriraam Natarajan, Kaushik Roy, Ronald Parr, Kristian Kersting
We consider the problem of Approximate Dynamic Programming in relational domains.
no code implementations • 21 May 2020 • Yinghan Long, Indranil Chakraborty, Kaushik Roy
The proposed network can be deployed in a distributed manner, consisting of quantized layers and early exits at the edge and full-precision layers on the cloud.
1 code implementation • ICLR 2020 • Nitin Rathi, Gopalakrishnan Srinivasan, Priyadarshini Panda, Kaushik Roy
We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization step for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within few epochs and requires fewer time steps for input processing.
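A compact sketch of the second stage, spike-based backpropagation with a surrogate gradient on a toy single-layer SNN. In the actual methodology the weights and thresholds would come from an ANN-to-SNN conversion rather than random initialization, and the neuron model and surrogate shape here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a piecewise-linear surrogate gradient (illustrative)."""
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()
    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * torch.clamp(1.0 - x.abs(), min=0.0)

class TinySNN(nn.Module):
    def __init__(self, n_in=100, n_out=10, thresh=1.0, leak=0.95):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out, bias=False)
        self.thresh, self.leak = thresh, leak

    def forward(self, x_seq):                     # x_seq: (T, batch, n_in) spike inputs
        v = torch.zeros(x_seq.shape[1], self.fc.out_features)
        out = 0.0
        for x_t in x_seq:
            v = self.leak * v + self.fc(x_t)      # leaky integration of weighted spikes
            s = SurrogateSpike.apply(v - self.thresh)
            v = v - s * self.thresh               # subtractive reset after a spike
            out = out + s
        return out / x_seq.shape[0]               # spike-rate output

# Stage 1 (assumed): copy weights/thresholds from an ANN-to-SNN conversion.
# Stage 2: fine-tune with spike-based backpropagation for a few epochs.
snn = TinySNN()
opt = torch.optim.Adam(snn.parameters(), lr=1e-3)
x = (torch.rand(20, 8, 100) < 0.3).float()        # toy Poisson-like spike trains
y = torch.randint(0, 10, (8,))
for _ in range(3):
    loss = nn.functional.cross_entropy(snn(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```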
no code implementations • 21 Apr 2020 • Maryam Parsa, Catherine D. Schuman, Prasanna Date, Derek C. Rose, Bill Kay, J. Parker Mitchell, Steven R. Young, Ryan Dellana, William Severa, Thomas E. Potok, Kaushik Roy
In this work, we introduce a Bayesian approach for optimizing the hyperparameters of an algorithm for training binary communication networks that can be deployed to neuromorphic hardware.
no code implementations • 27 Mar 2020 • Mustafa Ali, Akhilesh Jaiswal, Sangamesh Kodge, Amogh Agrawal, Indranil Chakraborty, Kaushik Roy
`In-memory computing' is being widely explored as a novel computing paradigm to mitigate the well known memory bottleneck.
1 code implementation • ECCV 2020 • Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy
Our results suggest that SNNs trained with LIF neurons and smaller number of timesteps are more robust than the ones with IF (Integrate-Fire) neurons and larger number of timesteps.
no code implementations • CVPR 2020 • Chi Nhan Duong, Thanh-Dat Truong, Kha Gia Quach, Hung Bui, Kaushik Roy, Khoa Luu
Unveiling face images of a subject given his/her high-level representations extracted from a blackbox Face Recognition engine is extremely challenging.
no code implementations • 15 Mar 2020 • Indranil Chakraborty, Mustafa Fayez Ali, Dong Eun Kim, Aayush Ankit, Kaushik Roy
Further, using the functional simulator and GENIEx, we demonstrate that an analytical model can overestimate the degradation in classification accuracy by $\ge 10\%$ on CIFAR-100 and $3.7\%$ on ImageNet datasets compared to GENIEx.
Emerging Technologies
1 code implementation • ECCV 2020 • Chankyu Lee, Adarsh Kumar Kosta, Alex Zihao Zhu, Kenneth Chaney, Kostas Daniilidis, Kaushik Roy
Spiking Neural Networks (SNNs) serve as ideal paradigms to handle event camera outputs, but deep SNNs suffer in terms of performance due to the spike vanishing phenomenon.
no code implementations • 2 Mar 2020 • Jason M. Allred, Steven J. Spencer, Gopalakrishnan Srinivasan, Kaushik Roy
Spiking Neural Networks (SNNs) are being explored for their potential energy efficiency resulting from sparse, event-driven computations.
1 code implementation • CVPR 2020 • Bing Han, Gopalakrishnan Srinivasan, Kaushik Roy
We find that performance degradation in the converted SNN stems from using "hard reset" spiking neuron that is driven to fixed reset potential once its membrane potential exceeds the firing threshold, leading to information loss during SNN inference.
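The distinction can be seen in a few lines: a hard reset discards any membrane potential above the threshold at spike time, whereas a subtractive (soft) reset carries the surplus forward. This toy comparison only illustrates the information-loss argument; it is not the paper's full conversion scheme.

```python
def integrate_and_fire(inputs, thresh=1.0, reset="hard"):
    """Step an IF neuron over a list of input currents and return its spike train.

    reset="hard": the membrane potential snaps back to 0 after a spike, discarding
    any surplus above threshold. reset="soft": the threshold is subtracted, so the
    surplus carries over to the next step.
    """
    v, spikes = 0.0, []
    for x in inputs:
        v += x
        if v >= thresh:
            spikes.append(1)
            v = 0.0 if reset == "hard" else v - thresh
        else:
            spikes.append(0)
    return spikes

inp = [0.6, 0.9, 0.9, 0.1, 0.9]
print("hard:", integrate_and_fire(inp, reset="hard"))  # loses the surplus potential
print("soft:", integrate_and_fire(inp, reset="soft"))  # preserves it, fires more
```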
no code implementations • 25 Feb 2020 • Sai Aparna Aketi, Priyadarshini Panda, Kaushik Roy
To address this issue, we propose an ensemble of classifiers at hidden layers to enable energy efficient detection of natural errors.
1 code implementation • 23 Feb 2020 • Sai Aparna Aketi, Sourjya Roy, Anand Raghunathan, Kaushik Roy
To address all the above issues, we present a simple-yet-effective gradual channel pruning while training methodology using a novel data-driven metric referred to as feature relevance score.
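A hedged sketch of one gradual pruning step, using mean absolute activation as a stand-in relevance proxy; the paper's feature relevance score is a different, data-driven metric, so treat this purely as an illustration of channel pruning during training.

```python
import torch
import torch.nn as nn

def prune_channels_by_relevance(conv, bn, activations, prune_frac=0.1):
    """Zero out the least 'relevant' output channels of a conv layer.

    Relevance is approximated here by the mean absolute activation per channel on a
    calibration batch; this is an illustrative proxy, not the paper's metric.
    """
    with torch.no_grad():
        relevance = activations.abs().mean(dim=(0, 2, 3))      # (out_channels,)
        n_prune = int(prune_frac * relevance.numel())
        idx = relevance.argsort()[:n_prune]                    # least relevant channels
        conv.weight[idx] = 0.0
        if bn is not None:
            bn.weight[idx] = 0.0
            bn.bias[idx] = 0.0
    return idx

conv = nn.Conv2d(3, 16, 3, padding=1)
bn = nn.BatchNorm2d(16)
x = torch.randn(8, 3, 32, 32)
acts = bn(conv(x))
pruned = prune_channels_by_relevance(conv, bn, acts, prune_frac=0.25)
print("pruned channels:", pruned.tolist())
```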
1 code implementation • 23 Jan 2020 • Gobinda Saha, Isha Garg, Aayush Ankit, Kaushik Roy
A minimal number of extra dimensions required to explain the current task are added to the Core space and the remaining Residual is freed up for learning the next task.
no code implementations • 30 Oct 2019 • Priyadarshini Panda, Aparna Aketi, Kaushik Roy
Spiking Neural Networks (SNNs) may offer an energy-efficient alternative for implementing deep learning applications.
no code implementations • 29 Jun 2019 • Amogh Agrawal, Chankyu Lee, Kaushik Roy
We rank the DNN weights and kernels based on a sensitivity analysis, and re-arrange the columns such that the most sensitive kernels are mapped closer to the drivers, thereby minimizing the impact of errors on the overall accuracy.
Emerging Technologies
no code implementations • 11 Jun 2019 • Maryam Parsa, Aayush Ankit, Amirkoushyar Ziabari, Kaushik Roy
The ever increasing computational cost of Deep Neural Networks (DNN) and the demand for energy efficient hardware for DNN acceleration has made accuracy and hardware cost co-optimization for DNNs tremendously important, especially for edge devices.
1 code implementation • 4 Jun 2019 • Wachirawit Ponghiran, Gopalakrishnan Srinivasan, Kaushik Roy
We propose reinforcement learning on simple networks consisting of random connections of spiking neurons (both recurrent and feed-forward) that can learn complex tasks with very little trainable parameters.
1 code implementation • 4 Jun 2019 • Indranil Chakraborty, Deboleena Roy, Isha Garg, Aayush Ankit, Kaushik Roy
The `Internet of Things' has brought increased demand for AI-based edge computing in applications ranging from healthcare monitoring systems to autonomous vehicles.
no code implementations • 24 May 2019 • Deboleena Roy, Priyadarshini Panda, Kaushik Roy
The spiking autoencoders are benchmarked on MNIST and Fashion-MNIST and achieve very low reconstruction loss, comparable to ANNs.
no code implementations • 8 May 2019 • Priyadarshini Panda, Efstathia Soufleri, Kaushik Roy
We analyze the stability of recurrent networks, specifically, reservoir computing models during training by evaluating the eigenvalue spectra of the reservoir dynamics.
no code implementations • 7 May 2019 • Saima Sharmin, Priyadarshini Panda, Syed Shakib Sarwar, Chankyu Lee, Wachirawit Ponghiran, Kaushik Roy
In this work, we present, for the first time, a comprehensive analysis of the behavior of more bio-plausible networks, namely Spiking Neural Network (SNN) under state-of-the-art adversarial tests.
no code implementations • 15 Mar 2019 • Chankyu Lee, Syed Shakib Sarwar, Priyadarshini Panda, Gopalakrishnan Srinivasan, Kaushik Roy
Spiking Neural Networks (SNNs) have recently emerged as a prominent neural computing paradigm.
no code implementations • 11 Feb 2019 • Gopalakrishnan Srinivasan, Kaushik Roy
In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs.
no code implementations • 8 Feb 2019 • Priyadarshini Panda, Indranil Chakraborty, Kaushik Roy
Specifically, discretizing the input space (or allowed pixel levels from 256 values or 8-bit to 4 values or 2-bit) extensively improves the adversarial robustness of DLNs for a substantial range of perturbations for minimal loss in test accuracy.
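For concreteness, reducing 8-bit pixels to 2-bit (4 levels) can be done with a simple bin-and-recenter quantization, as in this small sketch (the exact discretization used in the paper may differ).

```python
import numpy as np

def discretize_pixels(img, n_bits=2):
    """Quantize 8-bit pixel values down to 2**n_bits levels and map back to [0, 255]."""
    levels = 2 ** n_bits
    step = 256 // levels                     # 64 for 2-bit
    return (img // step) * step + step // 2  # reconstruct each bin at its midpoint

img = np.random.randint(0, 256, size=(4, 4), dtype=np.int64)
print(np.unique(discretize_pixels(img, n_bits=2)))  # at most 4 distinct values
```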
no code implementations • 8 Feb 2019 • Jason M. Allred, Kaushik Roy
Stochastic gradient descent requires that training samples be drawn from a uniformly random distribution of the data.
no code implementations • 1 Feb 2019 • Indranil Chakraborty, Deboleena Roy, Aayush Ankit, Kaushik Roy
In this work, we propose extremely quantized hybrid network architectures with both binary and full-precision sections to emulate the classification performance of full-precision networks while ensuring significant energy efficiency and memory compression.
no code implementations • 29 Jan 2019 • Aayush Ankit, Izzat El Hajj, Sai Rahul Chalamalasetti, Geoffrey Ndu, Martin Foltin, R. Stanley Williams, Paolo Faraboschi, Wen-mei Hwu, John Paul Strachan, Kaushik Roy, Dejan S Milojicic
We also present the PUMA compiler which translates high-level code to PUMA ISA.
Emerging Technologies
Hardware Architecture
no code implementations • 15 Dec 2018 • Isha Garg, Priyadarshini Panda, Kaushik Roy
We demonstrate the proposed methodology on AlexNet and VGG style networks on the CIFAR-10, CIFAR-100 and ImageNet datasets, and successfully achieve an optimized architecture with a reduction of up to 3.8X and 9X in the number of operations and parameters respectively, while trading off less than 1% accuracy.
no code implementations • 28 Nov 2018 • Kha Gia Quach, Ngan Le, Chi Nhan Duong, Ibsa Jalata, Kaushik Roy, Khoa Luu
To demonstrate the robustness and effectiveness of each component in the proposed approach, three experiments were conducted: (i) evaluation on AffectNet database to benchmark the proposed EmoNet for recognizing facial expression; (ii) evaluation on EmotiW2018 to benchmark the proposed deep feature level fusion mechanism NVPF; and, (iii) examine the proposed TNVPF on an innovative Group-level Emotion on Crowd Videos (GECV) dataset composed of 627 videos collected from publicly available sources.
no code implementations • 31 Aug 2018 • Shubham Jain, Abhronil Sengupta, Kaushik Roy, Anand Raghunathan
We present RxNN, a fast and accurate simulation framework to evaluate large-scale DNNs on resistive crossbar systems.
1 code implementation • 5 Jul 2018 • Priyadarshini Panda, Kaushik Roy
We introduce a Noise-based prior Learning (NoL) approach for training neural networks that are intrinsically robust to adversarial attacks.
no code implementations • 1 Jul 2018 • Amogh Agrawal, Akhilesh Jaiswal, Deboleena Roy, Bing Han, Gopalakrishnan Srinivasan, Aayush Ankit, Kaushik Roy
In this paper, we demonstrate how deep binary networks can be accelerated in modified von-Neumann machines by enabling binary convolutions within the SRAM array.
Emerging Technologies
no code implementations • 13 Jun 2018 • Baibhab Chatterjee, Priyadarshini Panda, Shovan Maity, Ayan Biswas, Kaushik Roy, Shreyas Sen
In this work, we will analyze, compare and contrast existing neuron architectures with a proposed mixed-signal neuron (MS-N) in terms of performance, power and noise, thereby demonstrating the applicability of the proposed mixed-signal neuron for achieving extreme energy-efficiency in neuromorphic computing.
1 code implementation • 15 Feb 2018 • Deboleena Roy, Priyadarshini Panda, Kaushik Roy
Over the past decade, Deep Convolutional Neural Networks (DCNNs) have shown remarkable performance in most computer vision tasks.
1 code implementation • 7 Feb 2018 • Abhronil Sengupta, Yuting Ye, Robert Wang, Chiao Liu, Kaushik Roy
Over the past few years, Spiking Neural Networks (SNNs) have become popular as a possible pathway to enable low-power event-driven neuromorphic hardware.
no code implementations • 5 Jan 2018 • Soumya Ukil, Swarnendu Ghosh, Sk Md Obaidullah, K. C. Santosh, Kaushik Roy, Nibaran Das
These are then used to train different CNNs to select features.
no code implementations • 26 Dec 2017 • Priyadarshini Panda, Kaushik Roy
Anatomical studies demonstrate that the brain reformats input information to generate reliable responses for performing computations.
no code implementations • 7 Dec 2017 • Syed Shakib Sarwar, Aayush Ankit, Kaushik Roy
We propose an efficient training methodology and incrementally growing DCNN to learn new tasks while sharing part of the base network.
no code implementations • 24 Oct 2017 • Baibhab Chatterjee, Priyadarshini Panda, Shovan Maity, Kaushik Roy, Shreyas Sen
This work presents the design and analysis of a mixed-signal neuron (MS-N) for convolutional neural networks (CNN) and compares its performance with a digital neuron (Dig-N) in terms of operating frequency, power and noise.
no code implementations • 12 Oct 2017 • Nitin Rathi, Priyadarshini Panda, Kaushik Roy
We present a sparse SNN topology where non-critical connections are pruned to reduce the network size and the remaining critical synapses are weight quantized to accommodate for limited conductance levels.
no code implementations • 26 Aug 2017 • Aayush Ankit, Abhronil Sengupta, Kaushik Roy
Implementation of Neuromorphic Systems using post Complementary Metal-Oxide-Semiconductor (CMOS) technology based Memristive Crossbar Array (MCA) has emerged as a promising solution to enable low-power acceleration of neural networks.
no code implementations • 19 May 2017 • Akhilesh Jaiswal, Amogh Agrawal, Priyadarshini Panda, Kaushik Roy
The basic building blocks of such neuromorphic systems are neurons and synapses.
no code implementations • 12 May 2017 • Syed Shakib Sarwar, Priyadarshini Panda, Kaushik Roy
This combination creates a balanced system that gives better training performance in terms of energy and time, compared to the standalone CNN (without any Gabor kernels), in exchange for tolerable accuracy degradation.
no code implementations • 22 Mar 2017 • Priyadarshini Panda, Jason M. Allred, Shriram Ramanathan, Kaushik Roy
Against this backdrop, we present a novel unsupervised learning mechanism ASP (Adaptive Synaptic Plasticity) for improved recognition with Spiking Neural Networks (SNNs) for real time on-line learning in a dynamic environment.
no code implementations • 10 Mar 2017 • Priyadarshini Panda, Gopalakrishnan Srinivasan, Kaushik Roy
Brain-inspired learning models attempt to mimic the cortical architecture and computations performed in the neurons and synapses constituting the human brain to achieve its efficiency in cognitive tasks.
no code implementations • 20 Feb 2017 • Aayush Ankit, Abhronil Sengupta, Priyadarshini Panda, Kaushik Roy
In this paper, we propose RESPARC - a reconfigurable and energy efficient architecture built-on Memristive Crossbar Arrays (MCA) for deep Spiking Neural Networks (SNNs).
1 code implementation • 29 Sep 2016 • Akhilesh Jaiswal, Sourjya Roy, Gopalakrishnan Srinivasan, Kaushik Roy
The efficiency of the human brain in performing classification tasks has attracted considerable research interest in brain-inspired neuromorphic computing.
no code implementations • 12 Sep 2016 • Priyadarshini Panda, Aayush Ankit, Parami Wijesinghe, Kaushik Roy
We evaluate our approach for a 12-object classification task on the Caltech101 dataset and 10-object task on CIFAR-10 dataset by constructing FALCON models on the NeuE platform in 45nm technology.
no code implementations • 1 Aug 2016 • Priyadarshini Panda, Kaushik Roy
A set of binary classifiers is organized on top of the learnt hierarchy to minimize the overall test-time complexity.
no code implementations • 27 Feb 2016 • Syed Shakib Sarwar, Swagath Venkataramani, Anand Raghunathan, Kaushik Roy
Multipliers consume most of the processing energy in the digital neurons, and thereby in the hardware implementations of artificial neural networks.
no code implementations • 27 Feb 2016 • Gopalakrishnan Srinivasan, Parami Wijesinghe, Syed Shakib Sarwar, Akhilesh Jaiswal, Kaushik Roy
Our analysis on a widely used digit recognition dataset indicates that the voltage can be scaled by 200mV from the nominal operating voltage (950mV) for practically no loss (less than 0.5%) in accuracy (22nm predictive technology).
no code implementations • 3 Feb 2016 • Priyadarshini Panda, Kaushik Roy
We present a spike-based unsupervised regenerative learning scheme to train Spiking Deep Networks (SpikeCNN) for object recognition problems using biologically realistic leaky integrate-and-fire neurons.
no code implementations • 29 Sep 2015 • Priyadarshini Panda, Abhronil Sengupta, Kaushik Roy
Deep learning neural networks have emerged as one of the most powerful classification tools for vision related applications.
no code implementations • 29 Sep 2015 • Priyadarshini Panda, Swagath Venkataramani, Abhronil Sengupta, Anand Raghunathan, Kaushik Roy
We propose a 2-stage hierarchical classification framework, with increasing levels of complexity, wherein the first stage is trained to recognize the broad representative semantic features relevant to the object of interest.