Search Results for author: Youngeun Kim

Found 39 papers, 16 papers with code

One-stage Prompt-based Continual Learning

no code implementations25 Feb 2024 Youngeun Kim, Yuhang Li, Priyadarshini Panda

With the QR loss, our approach maintains a ~50% computational cost reduction during inference while outperforming prior two-stage PCL methods by ~1.4% on public class-incremental continual learning benchmarks including CIFAR-100, ImageNet-R, and DomainNet.

Continual Learning

TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training

no code implementations15 Jan 2024 DongHyun Lee, Ruokai Yin, Youngeun Kim, Abhishek Moitra, Yuhang Li, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained significant attention as a potentially energy-efficient alternative to standard neural networks owing to their sparse binary activations.

Tensor Decomposition

GenQ: Quantization in Low Data Regimes with Generative Synthetic Data

no code implementations7 Dec 2023 Yuhang Li, Youngeun Kim, DongHyun Lee, Souvik Kundu, Priyadarshini Panda

In the realm of deep neural network deployment, low-bit quantization presents a promising avenue for enhancing computational efficiency.

Computational Efficiency Quantization +1

Rethinking Skip Connections in Spiking Neural Networks with Time-To-First-Spike Coding

no code implementations1 Dec 2023 Youngeun Kim, Adar Kahana, Ruokai Yin, Yuhang Li, Panos Stinis, George Em Karniadakis, Priyadarshini Panda

In this work, we delve into the role of skip connections, a widely used concept in Artificial Neural Networks (ANNs), within the domain of SNNs with TTFS coding.

RobustEdge: Low Power Adversarial Detection for Cloud-Edge Systems

no code implementations5 Sep 2023 Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda

However, in prior detection works, the detector is attached to the classifier model, and the two work in tandem to perform adversarial detection, incurring a high computational overhead that is not affordable at the low-power edge.

Adversarial Robustness Quantization

Artificial to Spiking Neural Networks Conversion for Scientific Machine Learning

no code implementations31 Aug 2023 Qian Zhang, Chenxi Wu, Adar Kahana, Youngeun Kim, Yuhang Li, George Em Karniadakis, Priyadarshini Panda

We introduce a method to convert Physics-Informed Neural Networks (PINNs), commonly used in scientific machine learning, to Spiking Neural Networks (SNNs), which are expected to have higher energy efficiency compared to traditional Artificial Neural Networks (ANNs).

Computational Efficiency

Examining the Role and Limits of Batchnorm Optimization to Mitigate Diverse Hardware-noise in In-memory Computing

no code implementations28 May 2023 Abhiroop Bhattacharjee, Abhishek Moitra, Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda

In-Memory Computing (IMC) platforms such as analog crossbars are gaining focus as they facilitate the acceleration of low-precision Deep Neural Networks (DNNs) with high area- & compute-efficiencies.

Quantization

Sharing Leaky-Integrate-and-Fire Neurons for Memory-Efficient Spiking Neural Networks

no code implementations26 May 2023 Youngeun Kim, Yuhang Li, Abhishek Moitra, Ruokai Yin, Priyadarshini Panda

Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation.

Human Activity Recognition

Do We Really Need a Large Number of Visual Prompts?

no code implementations26 May 2023 Youngeun Kim, Yuhang Li, Abhishek Moitra, Priyadarshini Panda

Due to increasing interest in adapting models on resource-constrained edges, parameter-efficient transfer learning has been widely explored.

Transfer Learning Visual Prompt Tuning

Divide-and-Conquer the NAS puzzle in Resource Constrained Federated Learning Systems

no code implementations11 May 2023 Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda

In this paper, we propose DC-NAS -- a divide-and-conquer approach that performs supernet-based Neural Architecture Search (NAS) in a federated system by systematically sampling the search space.

Federated Learning Neural Architecture Search +1

Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient

1 code implementation25 Apr 2023 Yuhang Li, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda

However, some essential questions exist pertaining to SNNs that are little studied: Do SNNs trained with surrogate gradient learn different representations from traditional Artificial Neural Networks (ANNs)?

SEENN: Towards Temporal Spiking Early-Exit Neural Networks

1 code implementation2 Apr 2023 Yuhang Li, Tamar Geller, Youngeun Kim, Priyadarshini Panda

However, we observe that the information capacity in SNNs is affected by the number of timesteps, leading to an accuracy-efficiency tradeoff.
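As a rough illustration of the early-exit idea (a hedged sketch, not the SEENN implementation; the `snn` callable, batch size 1, and the confidence threshold are assumptions), inference can stop as soon as the accumulated prediction becomes confident:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def early_exit_inference(snn, x, max_timesteps=8, threshold=0.9):
    """Run an SNN timestep by timestep (batch size 1) and stop as soon as the
    softmax confidence of the accumulated logits crosses `threshold`."""
    accumulated = 0.0
    for t in range(max_timesteps):
        accumulated = accumulated + snn(x, t)       # per-timestep logits, accumulated over time
        probs = F.softmax(accumulated / (t + 1), dim=-1)
        confidence, prediction = probs.max(dim=-1)
        if confidence.item() >= threshold:          # confident enough: exit early
            return prediction, t + 1                # predicted label and timesteps actually used
    return prediction, max_timesteps
```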

XPert: Peripheral Circuit & Neural Architecture Co-search for Area and Energy-efficient Xbar-based Computing

1 code implementation30 Mar 2023 Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda

The hardware-efficiency and accuracy of Deep Neural Networks (DNNs) implemented on In-memory Computing (IMC) architectures primarily depend on the DNN architecture and the peripheral circuit parameters.

Workload-Balanced Pruning for Sparse Spiking Neural Networks

no code implementations13 Feb 2023 Ruokai Yin, Youngeun Kim, Yuhang Li, Abhishek Moitra, Nitin Satpute, Anna Hambitzer, Priyadarshini Panda

Though the existing pruning methods can provide extremely high weight sparsity for deep SNNs, the high weight sparsity brings a workload imbalance problem.

Exploring Temporal Information Dynamics in Spiking Neural Networks

1 code implementation26 Nov 2022 Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, Priyadarshini Panda

After training, we observe that information becomes highly concentrated in earlier few timesteps, a phenomenon we refer to as temporal information concentration.

Exploring Lottery Ticket Hypothesis in Spiking Neural Networks

1 code implementation4 Jul 2022 Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda

To scale up a pruning technique towards deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that achieve comparable performance to the dense networks.
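For context, the standard LTH recipe the excerpt refers to is iterative magnitude pruning with weight rewinding; a minimal sketch under that assumption (the `train_fn` hook and pruning schedule are placeholders, not the paper's code):

```python
import copy
import torch

def find_winning_ticket(model, train_fn, rounds=3, prune_frac=0.2):
    """Iterative magnitude pruning: train, prune the smallest surviving
    weights, rewind the remainder to their initial values, and repeat."""
    init_state = copy.deepcopy(model.state_dict())            # theta_0, kept for rewinding
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train_fn(model, masks)                                 # user-supplied masked training loop
        for name, param in model.named_parameters():
            surviving = param.detach().abs()[masks[name] == 1]
            if surviving.numel() == 0:
                continue
            k = max(1, int(prune_frac * surviving.numel()))
            cutoff = surviving.sort().values[k - 1]            # k-th smallest surviving magnitude
            masks[name] *= (param.detach().abs() > cutoff).float()
        model.load_state_dict(init_state)                      # rewind weights to initialization
    return masks                                               # sparsity mask of the winning ticket
```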

Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars

no code implementations20 Jun 2022 Abhiroop Bhattacharjee, Youngeun Kim, Abhishek Moitra, Priyadarshini Panda

Spiking Neural Networks (SNNs) have recently emerged as the low-power alternative to Artificial Neural Networks (ANNs) owing to their asynchronous, sparse, and binary information processing.

Loss-based Sequential Learning for Active Domain Adaptation

no code implementations25 Apr 2022 Kyeongtak Han, Youngeun Kim, Dongyoon Han, Sungeun Hong

To solve these, we fully utilize pseudo labels of the unlabeled target domain by leveraging loss prediction.
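A hedged sketch of how a loss-prediction module can drive active selection on the unlabeled target domain (the loader protocol, module names, and selection rule are assumptions, not the paper's interface):

```python
import torch

@torch.no_grad()
def select_for_labeling(model, loss_predictor, unlabeled_loader, budget=100):
    """Rank unlabeled target samples by their predicted loss and return the
    indices of the hardest ones to send to the annotator."""
    scores, indices = [], []
    for idx, x in unlabeled_loader:                 # loader assumed to yield (index, image) pairs
        outputs = model(x)                          # backbone outputs fed to the loss predictor
        scores.append(loss_predictor(outputs).squeeze(-1))
        indices.append(idx)
    scores, indices = torch.cat(scores), torch.cat(indices)
    top = scores.topk(budget).indices               # highest predicted loss = most informative
    return indices[top]
```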

Domain Adaptation

SATA: Sparsity-Aware Training Accelerator for Spiking Neural Networks

1 code implementation11 Apr 2022 Ruokai Yin, Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda

Based on SATA, we show quantitative analyses of the energy efficiency of SNN training and compare the training cost of SNNs and ANNs.

Total Energy

Addressing Client Drift in Federated Continual Learning with Adaptive Optimization

no code implementations24 Mar 2022 Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Yuhang Li, Priyadarshini Panda

However, there is little attention towards additional challenges emerging when federated aggregation is performed in a continual learning system.

Continual Learning Federated Learning +1

Neuromorphic Data Augmentation for Training Spiking Neural Networks

1 code implementation11 Mar 2022 Yuhang Li, Youngeun Kim, Hyoungseob Park, Tamar Geller, Priyadarshini Panda

In an effort to minimize this generalization gap, we propose Neuromorphic Data Augmentation (NDA), a family of geometric augmentations specifically designed for event-based datasets with the goal of significantly stabilizing the SNN training and reducing the generalization gap between training and test performance.
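For intuition, a minimal sketch of such geometric augmentations applied consistently across the time axis of event frames (shapes and magnitudes are illustrative, not the released NDA policy):

```python
import random
import torch

def augment_event_frames(frames: torch.Tensor) -> torch.Tensor:
    """Apply the same random geometric transform to every timestep of an
    event-frame tensor shaped [T, C, H, W], so temporal consistency is kept."""
    if random.random() < 0.5:
        frames = torch.flip(frames, dims=[-1])                 # horizontal flip
    # random spatial shift (roll) of up to +/- 5 pixels in each direction
    dx, dy = random.randint(-5, 5), random.randint(-5, 5)
    frames = torch.roll(frames, shifts=(dy, dx), dims=(-2, -1))
    return frames
```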

Ranked #1 on Event data classification on CIFAR10-DVS (using extra training data)

Contrastive Learning Data Augmentation +1

Adversarial Detection without Model Information

1 code implementation9 Feb 2022 Abhishek Moitra, Youngeun Kim, Priyadarshini Panda

We train a standalone detector independent of the classifier model, with a layer-wise energy separation (LES) training to increase the separation between natural and adversarial energies.
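A simplified, non-layer-wise sketch of the energy-separation objective (the `detector` scoring function and margin are assumptions, not the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def separation_loss(detector, natural, adversarial, margin=1.0):
    """Push the detector's scalar energy score down for natural inputs and
    up for adversarial inputs, so a single threshold can separate them."""
    e_nat = detector(natural).mean()        # average energy of the natural batch
    e_adv = detector(adversarial).mean()    # average energy of the adversarial batch
    # hinge-style loss: penalize whenever the energy gap is smaller than `margin`
    return F.relu(margin - (e_adv - e_nat))
```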

Neural Architecture Search for Spiking Neural Networks

1 code implementation23 Jan 2022 Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Priyadarshini Panda

Interestingly, SNASNet found by our search algorithm achieves higher performance with backward connections, demonstrating the importance of designing SNN architecture for suitably using temporal information.

Neural Architecture Search

Gradient-based Bit Encoding Optimization for Noise-Robust Binary Memristive Crossbar

no code implementations5 Jan 2022 Youngeun Kim, Hyunsoo Kim, Seijoon Kim, Sang Joon Kim, Priyadarshini Panda

In addition, we propose Gradient-based Bit Encoding Optimization (GBO) which optimizes a different number of pulses at each layer, based on our in-depth analysis that each layer has a different level of noise sensitivity.

Beyond Classification: Directly Training Spiking Neural Networks for Semantic Segmentation

no code implementations14 Oct 2021 Youngeun Kim, Joshua Chough, Priyadarshini Panda

Specifically, we first investigate two representative SNN optimization techniques for recognition tasks (i.e., ANN-SNN conversion and surrogate gradient learning) on semantic segmentation datasets.

Autonomous Vehicles Classification +2

Federated Learning with Spiking Neural Networks

1 code implementation11 Jun 2021 Yeshwanth Venkatesha, Youngeun Kim, Leandros Tassiulas, Priyadarshini Panda

To validate the proposed federated learning framework, we experimentally evaluate the advantages of SNNs on various aspects of federated learning with CIFAR10 and CIFAR100 benchmarks.

Federated Learning Privacy Preserving

PrivateSNN: Privacy-Preserving Spiking Neural Networks

no code implementations7 Apr 2021 Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda

Class leakage occurs when class-related features can be reconstructed from network parameters.

Privacy Preserving

Visual Explanations from Spiking Neural Networks using Interspike Intervals

no code implementations26 Mar 2021 Youngeun Kim, Priyadarshini Panda

Spiking Neural Networks (SNNs) compute and communicate with asynchronous binary temporal events that can lead to significant energy savings with neuromorphic hardware.

Revisiting Batch Normalization for Training Low-latency Deep Spiking Neural Networks from Scratch

1 code implementation5 Oct 2020 Youngeun Kim, Priyadarshini Panda

Different from previous works, our proposed BNTT decouples the parameters in a BNTT layer along the time axis to capture the temporal dynamics of spikes.
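A minimal sketch of that time-axis decoupling (assuming a PyTorch-style module; illustrative only, not the authors' released BNTT code): keep one set of BatchNorm parameters per timestep and index into it as the SNN is unrolled.

```python
import torch.nn as nn

class BatchNormThroughTime(nn.Module):
    """Batch normalization with separate parameters for each timestep,
    so the temporal dynamics of spike activity can be captured independently."""
    def __init__(self, num_features: int, timesteps: int):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm2d(num_features) for _ in range(timesteps))

    def forward(self, x, t: int):
        # x: activations at timestep t, shaped [batch, channels, H, W]
        return self.bns[t](x)
```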

Associative Partial Domain Adaptation

no code implementations7 Aug 2020 Youngeun Kim, Sungeun Hong, Seunghan Yang, Sungil Kang, Yunho Jeon, Jiwon Kim

Our Associative Partial Domain Adaptation (APDA) utilizes intra-domain association to actively filter out non-trivial anomaly samples in each source-private class, which sample-level weighting cannot handle.

Partial Domain Adaptation

Domain Adaptation without Source Data

3 code implementations3 Jul 2020 Youngeun Kim, Donghyeon Cho, Kyeongtak Han, Priyadarshini Panda, Sungeun Hong

Our key idea is to leverage a pre-trained model from the source domain and progressively update the target model in a self-learning manner.
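A rough sketch of such a self-learning update using confidence-thresholded pseudo labels (the threshold and loop structure are assumptions, not the paper's exact procedure):

```python
import torch
import torch.nn.functional as F

def self_training_step(model, optimizer, target_batch, conf_thresh=0.95):
    """One source-free adaptation step: pseudo-label confident target samples
    with the current model and update the model on them."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(target_batch), dim=-1)
        confidence, pseudo_labels = probs.max(dim=-1)
        keep = confidence >= conf_thresh                 # keep only confident samples
    if keep.sum() == 0:
        return 0.0
    model.train()
    loss = F.cross_entropy(model(target_batch[keep]), pseudo_labels[keep])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```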

Attribute Domain Adaptation +1

Partial Domain Adaptation Using Graph Convolutional Networks

no code implementations16 May 2020 Seunghan Yang, Youngeun Kim, Dongki Jung, Changick Kim

Although existing partial domain adaptation methods effectively down-weight outliers' importance, they do not consider the data structure of each domain and do not directly align the feature distributions of the same class in the source and target domains, which may lead to misalignment of category-level distributions.

Partial Domain Adaptation

Learning to Align Multi-Camera Domains using Part-Aware Clustering for Unsupervised Video Person Re-Identification

no code implementations29 Sep 2019 Youngeun Kim, Seokeon Choi, Taekyung Kim, Sumin Lee, Changick Kim

Since the cost of labeling increases dramatically as the number of cameras increases, it is difficult to apply the re-identification algorithm to a large camera network.

Clustering Metric Learning +2

RPM-Net: Robust Pixel-Level Matching Networks for Self-Supervised Video Object Segmentation

no code implementations29 Sep 2019 Youngeun Kim, Seokeon Choi, Hankyeol Lee, Taekyung Kim, Changick Kim

In this paper, we introduce a self-supervised approach for video object segmentation without human-labeled data. Specifically, we present Robust Pixel-level Matching Networks (RPM-Net), a novel deep architecture that matches pixels between adjacent frames, using only color information from unlabeled videos for training.
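As a hedged illustration of pixel-level matching between adjacent frames (feature extraction, mask shapes, and the softmax matching rule are assumed; this is not the RPM-Net architecture), a soft affinity can propagate a mask from one frame to the next:

```python
import torch
import torch.nn.functional as F

def propagate_mask(feat_prev, feat_curr, mask_prev, temperature=0.1):
    """Propagate a soft segmentation mask from the previous frame to the
    current one via pixel-level feature affinity (soft matching)."""
    b, c, h, w = feat_prev.shape
    prev = feat_prev.flatten(2)                       # [B, C, H*W]
    curr = feat_curr.flatten(2)                       # [B, C, H*W]
    affinity = torch.einsum('bci,bcj->bij', prev, curr) / temperature
    weights = F.softmax(affinity, dim=1)              # each current pixel is a mix of previous pixels
    mask = mask_prev.flatten(2)                       # [B, K, H*W], K mask channels
    propagated = torch.einsum('bki,bij->bkj', mask, weights)
    return propagated.reshape(b, -1, h, w)
```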

Object Segmentation +3
