no code implementations • 25 Feb 2024 • Youngeun Kim, Yuhang Li, Priyadarshini Panda
With the QR loss, our approach maintains a ~50% computational cost reduction during inference while outperforming prior two-stage PCL methods by ~1.4% on public class-incremental continual learning benchmarks, including CIFAR-100, ImageNet-R, and DomainNet.
no code implementations • 15 Jan 2024 • DongHyun Lee, Ruokai Yin, Youngeun Kim, Abhishek Moitra, Yuhang Li, Priyadarshini Panda
Spiking Neural Networks (SNNs) have gained significant attention as a potentially energy-efficient alternative to standard neural networks owing to their sparse binary activations.
no code implementations • 7 Dec 2023 • Yuhang Li, Youngeun Kim, DongHyun Lee, Souvik Kundu, Priyadarshini Panda
In the realm of deep neural network deployment, low-bit quantization presents a promising avenue for enhancing computational efficiency.
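As a rough illustration of what low-bit quantization involves, the sketch below implements plain symmetric uniform quantization of a weight tensor in PyTorch; the 4-bit width and max-based scale are illustrative assumptions, not the paper's method.

```python
import torch

def uniform_quantize(w: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Symmetric uniform quantization to n_bits. A generic textbook
    quantizer, NOT the paper's method: the scale is taken from the max
    absolute value, a common but simplistic per-tensor choice."""
    qmax = 2 ** (n_bits - 1) - 1            # e.g. 7 for 4-bit signed
    scale = w.abs().max() / qmax            # per-tensor scale (assumption)
    w_int = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return w_int * scale                    # dequantized ("fake-quant") weights

w = torch.randn(64, 64)
w_q = uniform_quantize(w, n_bits=4)
print((w - w_q).abs().mean())               # mean quantization error
```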
no code implementations • 1 Dec 2023 • Youngeun Kim, Adar Kahana, Ruokai Yin, Yuhang Li, Panos Stinis, George Em Karniadakis, Priyadarshini Panda
In this work, we delve into the role of skip connections, a widely used concept in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding.
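For readers unfamiliar with TTFS coding, the following minimal sketch shows the idea: each input value emits exactly one spike, and larger values fire earlier. The linear time mapping is an assumption for illustration; the paper's skip-connection study itself is not reproduced here.

```python
import torch

def ttfs_encode(x: torch.Tensor, num_steps: int = 16) -> torch.Tensor:
    """Time-to-first-spike encoding: larger inputs fire earlier.

    x is assumed normalized to [0, 1]; each value produces exactly one
    spike at timestep round((1 - x) * (num_steps - 1))."""
    t_fire = ((1.0 - x.clamp(0, 1)) * (num_steps - 1)).round().long()
    spikes = torch.zeros(num_steps, *x.shape)
    spikes.scatter_(0, t_fire.unsqueeze(0), 1.0)   # one spike per input
    return spikes

x = torch.rand(4)             # e.g. normalized pixel intensities
s = ttfs_encode(x)            # shape: [num_steps, 4]
print(s.sum(dim=0))           # exactly one spike per input
```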
no code implementations • 5 Sep 2023 • Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda
However, in prior detection works, the detector is attached to the classifier model, and the two work in tandem to perform adversarial detection; this incurs a high computational overhead that is not affordable at the low-power edge.
no code implementations • 31 Aug 2023 • Qian Zhang, Chenxi Wu, Adar Kahana, Youngeun Kim, Yuhang Li, George Em Karniadakis, Priyadarshini Panda
We introduce a method to convert Physics-Informed Neural Networks (PINNs), commonly used in scientific machine learning, to Spiking Neural Networks (SNNs), which are expected to have higher energy efficiency compared to traditional Artificial Neural Networks (ANNs).
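As background, a PINN trains a network so that its derivatives satisfy a differential equation. The toy sketch below fits the hypothetical ODE u'(x) = cos(x) with u(0) = 0 (exact solution u = sin x); the subsequent conversion to an SNN is not shown.

```python
import torch

# Minimal PINN for the toy ODE u'(x) = cos(x), u(0) = 0.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def pinn_loss(x: torch.Tensor) -> torch.Tensor:
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = du - torch.cos(x)                 # physics residual
    bc = net(torch.zeros(1, 1))                  # boundary condition u(0) = 0
    return residual.pow(2).mean() + bc.pow(2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(1000):
    x = torch.rand(128, 1) * 2 * torch.pi        # collocation points
    opt.zero_grad(); loss = pinn_loss(x); loss.backward(); opt.step()
```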
no code implementations • 28 May 2023 • Abhiroop Bhattacharjee, Abhishek Moitra, Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda
In-Memory Computing (IMC) platforms such as analog crossbars are gaining attention as they facilitate the acceleration of low-precision Deep Neural Networks (DNNs) with high area and compute efficiency.
no code implementations • 26 May 2023 • Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda
Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation.
no code implementations • 26 May 2023 • Youngeun Kim, Yuhang Li, Abhishek Moitra, Priyadarshini Panda
Due to increasing interest in adapting models on resource-constrained edges, parameter-efficient transfer learning has been widely explored.
no code implementations • 11 May 2023 • Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda
In this paper, we propose DC-NAS -- a divide-and-conquer approach that performs supernet-based Neural Architecture Search (NAS) in a federated system by systematically sampling the search space.
1 code implementation • 25 Apr 2023 • Yuhang Li, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda
However, some essential questions pertaining to SNNs remain little studied: do SNNs trained with surrogate gradients learn representations different from those of traditional Artificial Neural Networks (ANNs)?
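For context, surrogate gradient training replaces the non-differentiable spike function's derivative with a smooth approximation in the backward pass. A minimal sketch, assuming a rectangular surrogate window (one common choice among several):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate
    derivative in the backward pass."""

    @staticmethod
    def forward(ctx, mem: torch.Tensor, threshold: float = 1.0):
        ctx.save_for_backward(mem)
        ctx.threshold = threshold
        return (mem >= threshold).float()          # binary spikes

    @staticmethod
    def backward(ctx, grad_output):
        (mem,) = ctx.saved_tensors
        # Pass gradients only near the threshold (window 0.5, an assumption)
        surrogate = (torch.abs(mem - ctx.threshold) < 0.5).float()
        return grad_output * surrogate, None

mem = torch.randn(8, requires_grad=True)           # membrane potentials
spikes = SurrogateSpike.apply(mem)
spikes.sum().backward()                            # gradients flow via surrogate
print(mem.grad)
```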
1 code implementation • 2 Apr 2023 • Yuhang Li, Tamar Geller, Youngeun Kim, Priyadarshini Panda
However, we observe that the information capacity in SNNs is affected by the number of timesteps, leading to an accuracy-efficiency tradeoff.
1 code implementation • 30 Mar 2023 • Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda
The hardware-efficiency and accuracy of Deep Neural Networks (DNNs) implemented on In-memory Computing (IMC) architectures primarily depend on the DNN architecture and the peripheral circuit parameters.
no code implementations • 13 Feb 2023 • Ruokai Yin, Youngeun Kim, Yuhang Li, Abhishek Moitra, Nitin Satpute, Anna Hambitzer, Priyadarshini Panda
Though existing pruning methods can provide extremely high weight sparsity for deep SNNs, this high sparsity introduces a workload-imbalance problem.
1 code implementation • 26 Nov 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, Priyadarshini Panda
After training, we observe that information becomes highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration.
1 code implementation • 14 Nov 2022 • Yuhang Li, Ruokai Yin, Hyoungseob Park, Youngeun Kim, Priyadarshini Panda
SNNs allow spatio-temporal extraction of features and enjoy low-power computation with binary spikes.
1 code implementation • 4 Jul 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda
To scale pruning up to deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that achieve performance comparable to the dense networks.
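For reference, the standard LTH procedure is iterative magnitude pruning with weight rewinding. The sketch below shows that generic recipe, not the paper's SNN-specific variant; the train_fn hook and per-round pruning fraction are assumptions.

```python
import copy
import torch

def find_winning_ticket(model, train_fn, prune_frac=0.2, rounds=5):
    """Iterative magnitude pruning in the spirit of LTH: train, prune
    the smallest-magnitude surviving weights, rewind survivors to their
    initial values, repeat. train_fn is assumed to train the (masked)
    model in place."""
    init_state = copy.deepcopy(model.state_dict())   # values to rewind to
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}

    for _ in range(rounds):
        train_fn(model, masks)                       # train with masks applied
        for name, p in model.named_parameters():
            w = (p * masks[name]).abs()
            alive = w[masks[name] > 0]
            k = int(prune_frac * alive.numel())
            if k > 0:
                thresh = alive.flatten().kthvalue(k).values
                masks[name][w <= thresh] = 0.0       # prune smallest weights
        with torch.no_grad():                        # rewind to initialization
            for name, p in model.named_parameters():
                p.copy_(init_state[name] * masks[name])
    return masks
```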
no code implementations • 20 Jun 2022 • Abhiroop Bhattacharjee, Youngeun Kim, Abhishek Moitra, Priyadarshini Panda
Spiking Neural Networks (SNNs) have recently emerged as the low-power alternative to Artificial Neural Networks (ANNs) owing to their asynchronous, sparse, and binary information processing.
no code implementations • 25 Apr 2022 • Kyeongtak Han, Youngeun Kim, Dongyoon Han, Sungeun Hong
To solve these issues, we fully utilize the pseudo labels of the unlabeled target domain by leveraging loss prediction.
1 code implementation • 11 Apr 2022 • Ruokai Yin, Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda
Based on SATA, we show quantitative analyses of the energy efficiency of SNN training and compare the training cost of SNNs and ANNs.
no code implementations • 24 Mar 2022 • Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Yuhang Li, Priyadarshini Panda
However, little attention has been paid to the additional challenges that emerge when federated aggregation is performed in a continual learning system.
1 code implementation • 11 Mar 2022 • Yuhang Li, Youngeun Kim, Hyoungseob Park, Tamar Geller, Priyadarshini Panda
To minimize this generalization gap, we propose Neuromorphic Data Augmentation (NDA), a family of geometric augmentations designed specifically for event-based datasets, with the goal of significantly stabilizing SNN training and reducing the generalization gap between training and test performance (see the sketch after this entry).
Ranked #1 on Event data classification on CIFAR10-DVS (using extra training data)
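The sketch below illustrates NDA-style geometric augmentation on event data binned into frames: random rolling (translation), horizontal flip, and cutout applied consistently across timesteps. The specific operations and magnitudes are representative choices; the paper's exact augmentation policy may differ.

```python
import random
import torch

def augment_event_frames(frames: torch.Tensor) -> torch.Tensor:
    """Geometric augmentation for event data binned into frames of
    shape [T, C, H, W]; all transforms are shared across timesteps
    so the temporal structure is preserved."""
    T, C, H, W = frames.shape
    # random translation via rolling
    dx = random.randint(-W // 8, W // 8)
    dy = random.randint(-H // 8, H // 8)
    frames = torch.roll(frames, shifts=(dy, dx), dims=(2, 3))
    if random.random() < 0.5:                       # horizontal flip
        frames = torch.flip(frames, dims=(3,))
    # cutout: zero a random square patch across all timesteps
    s = H // 4
    y, x = random.randint(0, H - s), random.randint(0, W - s)
    frames[..., y:y + s, x:x + s] = 0.0
    return frames

frames = torch.rand(10, 2, 48, 48)                  # e.g. DVS frames, 10 bins
aug = augment_event_frames(frames)
```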
1 code implementation • 9 Feb 2022 • Abhishek Moitra, Youngeun Kim, Priyadarshini Panda
We train a standalone detector independent of the classifier model, with a layer-wise energy separation (LES) training to increase the separation between natural and adversarial energies.
1 code implementation • 31 Jan 2022 • Youngeun Kim, Hyoungseob Park, Abhishek Moitra, Abhiroop Bhattacharjee, Yeshwanth Venkatesha, Priyadarshini Panda
Then, we measure the robustness of the coding techniques on two adversarial attack methods.
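As one example of the kind of attack used in such robustness measurements, the sketch below implements FGSM; whether FGSM is among the two attack methods evaluated in the paper is an assumption here.

```python
import torch

def fgsm_attack(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: a single signed-gradient step that
    maximizes the classification loss within an L-infinity ball."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()                 # one signed-gradient step
    return x_adv.clamp(0, 1).detach()               # keep a valid image range
```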
1 code implementation • 23 Jan 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Priyadarshini Panda
Interestingly, SNASNet, found by our search algorithm, achieves higher performance with backward connections, demonstrating the importance of designing SNN architectures that suitably exploit temporal information.
no code implementations • 5 Jan 2022 • Youngeun Kim, Hyunsoo Kim, Seijoon Kim, Sang Joon Kim, Priyadarshini Panda
In addition, we propose Gradient-based Bit Encoding Optimization (GBO), which optimizes a different number of pulses at each layer, based on our in-depth analysis showing that each layer has a different level of noise sensitivity.
no code implementations • 14 Oct 2021 • Youngeun Kim, Joshua Chough, Priyadarshini Panda
Specifically, we first investigate two representative SNN optimization techniques for recognition tasks (i.e., ANN-SNN conversion and surrogate gradient learning) on semantic segmentation datasets.
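For context, a common step in ANN-SNN conversion is threshold balancing: firing thresholds are set from activations observed on calibration data so that spike rates approximate ReLU outputs. A minimal sketch under that assumption (details such as percentile scaling and reset modes vary across methods):

```python
import torch

@torch.no_grad()
def set_snn_thresholds(ann_layers, calib_x):
    """Sketch of threshold balancing for ANN-SNN conversion: each IF
    layer's threshold is set to the maximum pre-activation seen on
    calibration data. ann_layers is assumed to be a list of linear
    layers whose weights are copied into the SNN unchanged."""
    thresholds = []
    h = calib_x
    for layer in ann_layers:
        h = layer(h)
        thresholds.append(h.max().item())    # layer-wise max activation
        h = torch.relu(h)
    return thresholds
```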
1 code implementation • 11 Jun 2021 • Yeshwanth Venkatesha, Youngeun Kim, Leandros Tassiulas, Priyadarshini Panda
To validate the proposed federated learning framework, we experimentally evaluate the advantages of SNNs on various aspects of federated learning with CIFAR10 and CIFAR100 benchmarks.
no code implementations • 7 Apr 2021 • Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda
2) Class leakage occurs when class-related features can be reconstructed from network parameters.
no code implementations • 26 Mar 2021 • Youngeun Kim, Priyadarshini Panda
Spiking Neural Networks (SNNs) compute and communicate with asynchronous binary temporal events that can lead to significant energy savings with neuromorphic hardware.
1 code implementation • 5 Oct 2020 • Youngeun Kim, Priyadarshini Panda
Different from previous works, our proposed BNTT decouples the parameters in a BNTT layer along the time axis to capture the temporal dynamics of spikes.
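Structurally, decoupling batch-norm parameters along the time axis can be sketched as one BatchNorm module per timestep, as below; this is an illustrative reading of the idea, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class BNTT(nn.Module):
    """Batch Norm Through Time: one BatchNorm per timestep, so the
    learnable scale and running statistics are decoupled along the
    time axis."""

    def __init__(self, num_features: int, num_steps: int):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_steps)
        )

    def forward(self, x: torch.Tensor, t: int) -> torch.Tensor:
        return self.bns[t](x)    # timestep-specific normalization

bntt = BNTT(num_features=16, num_steps=5)
x = torch.randn(8, 16, 32, 32)   # one timestep's activations
y = bntt(x, t=2)
```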
no code implementations • 7 Aug 2020 • Youngeun Kim, Sungeun Hong, Seunghan Yang, Sungil Kang, Yunho Jeon, Jiwon Kim
Our Associative Partial Domain Adaptation (APDA) utilizes intra-domain association to actively filter out the non-trivial anomaly samples in each source-private class that sample-level weighting cannot handle.
3 code implementations • 3 Jul 2020 • Youngeun Kim, Donghyeon Cho, Kyeongtak Han, Priyadarshini Panda, Sungeun Hong
Our key idea is to leverage a pre-trained model from the source domain and progressively update the target model in a self-learning manner.
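A minimal sketch of one self-learning update with pseudo labels is shown below; the confidence-threshold filtering is a common heuristic assumed here for illustration, not necessarily the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def self_learning_step(model, opt, target_x, conf_thresh=0.9):
    """One self-learning update on unlabeled target data: the current
    model pseudo-labels a batch, and only confident predictions are
    used for the cross-entropy update."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(target_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
    keep = conf > conf_thresh                      # trust confident samples only
    if keep.sum() == 0:
        return 0.0
    model.train()
    loss = F.cross_entropy(model(target_x[keep]), pseudo_y[keep])
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```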
no code implementations • 16 May 2020 • Seunghan Yang, Youngeun Kim, Dongki Jung, Changick Kim
Although existing partial domain adaptation methods effectively down-weigh the importance of outliers, they neither consider the data structure of each domain nor directly align the feature distributions of the same class in the source and target domains, which may lead to misalignment of category-level distributions.
1 code implementation • CVPR 2020 • Seokeon Choi, Sumin Lee, Youngeun Kim, Taekyung Kim, Changick Kim
To implement our approach, we introduce an ID-preserving person image generation network and a hierarchical feature learning module.
1 code implementation • 12 Oct 2019 • Seunghan Yang, Yoonhyung Kim, Youngeun Kim, Changick Kim
Most previous methods utilize the activation map corresponding to the highest activation source.
no code implementations • 2 Oct 2019 • Youngeun Kim, Seunghyeon Kim, Taekyung Kim, Changick Kim
Note that each binary image consists of the background and the regions belonging to a single class.
no code implementations • 29 Sep 2019 • Youngeun Kim, Seokeon Choi, Taekyung Kim, Sumin Lee, Changick Kim
Since the cost of labeling increases dramatically with the number of cameras, it is difficult to apply re-identification algorithms to a large camera network.
no code implementations • 29 Sep 2019 • Youngeun Kim, Seokeon Choi, Hankyeol Lee, Taekyung Kim, Changick Kim
In this paper, we introduce a self-supervised approach for video object segmentation without human-labeled data. Specifically, we present Robust Pixel-level Matching Networks (RPM-Net), a novel deep architecture that matches pixels between adjacent frames, using only color information from unlabeled videos for training.