Search Results for author: Yeshwanth Venkatesha

Found 12 papers, 5 papers with code

Examining the Role and Limits of Batchnorm Optimization to Mitigate Diverse Hardware-noise in In-memory Computing

no code implementations28 May 2023 Abhiroop Bhattacharjee, Abhishek Moitra, Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda

In-Memory Computing (IMC) platforms such as analog crossbars are gaining attention because they enable the acceleration of low-precision Deep Neural Networks (DNNs) with high area and compute efficiency.

Quantization

Divide-and-Conquer the NAS puzzle in Resource Constrained Federated Learning Systems

no code implementations11 May 2023 Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda

In this paper, we propose DC-NAS, a divide-and-conquer approach that performs supernet-based Neural Architecture Search (NAS) in a federated system by systematically sampling the search space (a toy sketch of the sampling idea follows this entry).

Federated Learning Neural Architecture Search +1
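
As a rough illustration of the divide-and-conquer sampling idea (not the authors' actual DC-NAS algorithm), the minimal sketch below partitions a per-layer operation search space into blocks and greedily settles one block at a time. `SEARCH_SPACE`, `mock_client_score`, and all the candidate operations are hypothetical placeholders.

```python
import random

# Hypothetical toy search space: each layer picks one candidate operation.
SEARCH_SPACE = {
    "layer1": ["conv3x3", "conv5x5", "skip"],
    "layer2": ["conv3x3", "conv5x5", "skip"],
    "layer3": ["conv3x3", "conv5x5", "skip"],
    "layer4": ["conv3x3", "conv5x5", "skip"],
}

def divide(space, num_blocks=2):
    """Divide step: partition the layers into smaller blocks so each
    phase of the search only explores part of the space."""
    layers = list(space)
    return [layers[i::num_blocks] for i in range(num_blocks)]

def sample_subnet(space, fixed):
    """Sample one candidate subnet from the supernet, keeping
    already-decided layers fixed."""
    return {l: fixed.get(l, random.choice(ops)) for l, ops in space.items()}

def mock_client_score(arch):
    """Placeholder for federated evaluation; a real system would
    train/validate the sampled subnet on each client's private data."""
    return sum(op != "skip" for op in arch.values()) + random.random()

decided = {}
for block in divide(SEARCH_SPACE):
    for layer in block:  # conquer step: settle one layer at a time
        scores = {}
        for op in SEARCH_SPACE[layer]:
            arch = sample_subnet(SEARCH_SPACE, {**decided, layer: op})
            scores[op] = mock_client_score(arch)
        decided[layer] = max(scores, key=scores.get)

print("selected architecture:", decided)
```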

Exploring Temporal Information Dynamics in Spiking Neural Networks

1 code implementation26 Nov 2022 Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, Priyadarshini Panda

After training, we observe that information becomes highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration.
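
The snippet only names the phenomenon; as a loose numpy illustration (not the paper's actual information measure), the sketch below drives a toy leaky-integrate readout with front-loaded spike rates and reports how much input drive each timestep contributes. All sizes, rates, and the "contribution" proxy are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

T, n_in, n_out = 8, 100, 10          # timesteps, input neurons, outputs
weights = rng.normal(scale=0.1, size=(n_in, n_out))

# Assumed front-loaded spike rates: earlier timesteps fire more often,
# mimicking a trained SNN whose information is concentrated early.
rates = np.linspace(0.6, 0.1, T)
spikes = (rng.random((T, n_in)) < rates[:, None]).astype(float)

membrane = np.zeros(n_out)
leak = 0.9
contributions = []
for t in range(T):
    drive = spikes[t] @ weights
    membrane = leak * membrane + drive   # leaky integration of input current
    # Crude per-timestep proxy: magnitude of the new evidence added at t.
    contributions.append(float(np.linalg.norm(drive)))

total = sum(contributions)
for t, c in enumerate(contributions):
    print(f"timestep {t}: {c / total:.1%} of total input drive")
```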

Exploring Lottery Ticket Hypothesis in Spiking Neural Networks

1 code implementation4 Jul 2022 Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda

To scale pruning techniques up to deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that achieve performance comparable to the dense networks.
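
As a minimal numpy sketch of the iterative-magnitude-pruning-with-rewinding recipe commonly associated with LTH (not the authors' SNN-specific procedure), the loop below prunes the smallest surviving weights each round and rewinds the rest to their original initialization; the stand-in `train` step and all sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(weights, mask, steps=100):
    """Stand-in for real SGD training; only the pruning logic matters here."""
    for _ in range(steps):
        weights = (weights - rng.normal(scale=0.01, size=weights.shape)) * mask
    return weights

init = rng.normal(size=(64, 64))   # dense initialization theta_0
mask = np.ones_like(init)
prune_rate = 0.2                   # drop 20% of surviving weights per round

for round_ in range(5):
    trained = train(init * mask, mask)
    # Magnitude pruning: remove the smallest surviving weights.
    threshold = np.quantile(np.abs(trained[mask == 1]), prune_rate)
    mask = ((np.abs(trained) > threshold) & (mask == 1)).astype(float)
    print(f"round {round_}: {mask.mean():.1%} of weights remain")

# LTH: the candidate "winning ticket" is the surviving subnetwork
# rewound to its ORIGINAL initialization, then retrained from there.
winning_ticket = init * mask
```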

Addressing Client Drift in Federated Continual Learning with Adaptive Optimization

no code implementations24 Mar 2022 Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Yuhang Li, Priyadarshini Panda

However, little attention has been paid to the additional challenges that emerge when federated aggregation is performed in a continual learning system.

Continual Learning Federated Learning +1

Neural Architecture Search for Spiking Neural Networks

1 code implementation23 Jan 2022 Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Priyadarshini Panda

Interestingly, SNASNet, found by our search algorithm, achieves higher performance with backward connections, demonstrating the importance of designing SNN architectures that suitably exploit temporal information.

Neural Architecture Search

Federated Learning with Spiking Neural Networks

1 code implementation11 Jun 2021 Yeshwanth Venkatesha, Youngeun Kim, Leandros Tassiulas, Priyadarshini Panda

To validate the proposed federated learning framework, we experimentally evaluate the advantages of SNNs on various aspects of federated learning with the CIFAR10 and CIFAR100 benchmarks (a minimal aggregation sketch follows this entry).

Federated Learning Privacy Preserving
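
For readers unfamiliar with the aggregation step referenced above, here is a minimal FedAvg-style sketch in numpy, assuming each client holds a private data shard and the server averages weights by data size; the stand-in `local_update` gradient and the client sizes are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, lr=0.1, steps=10):
    """Placeholder for local training; a real client would run
    surrogate-gradient SNN training on its private CIFAR shard."""
    w = global_w.copy()
    for _ in range(steps):
        w -= lr * rng.normal(scale=0.01, size=w.shape)  # stand-in gradient
    return w

global_w = rng.normal(size=(128, 10))     # one shared weight tensor
client_sizes = [600, 400, 800, 500, 700]  # hypothetical samples per client

for round_ in range(3):
    updates = [local_update(global_w) for _ in client_sizes]
    # FedAvg: weight each client's model by its share of the total data.
    total = sum(client_sizes)
    global_w = sum((n / total) * w for n, w in zip(client_sizes, updates))
    print(f"round {round_}: aggregated {len(updates)} client models")
```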

PrivateSNN: Privacy-Preserving Spiking Neural Networks

no code implementations7 Apr 2021 Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda

2) Class leakage occurs when class-related features can be reconstructed from network parameters.

Privacy Preserving

Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks

no code implementations12 Jan 2021 Karina Vasquez, Yeshwanth Venkatesha, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda

Additionally, we find that integrating AD-based quantization with AD-based pruning (both conducted during training) yields energy reductions of up to ~198x and ~44x for the VGG19 and ResNet18 architectures, respectively, on a Processing-In-Memory (PIM) platform, compared to baseline 16-bit precision, unpruned models (the density-to-bit-width idea is sketched after this entry).

Model Compression Quantization
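
As a loose illustration of the activation-density (AD) idea, one can measure the fraction of non-zero post-ReLU activations per layer and assign fewer bits to sparser layers. The density thresholds and bit-width mapping below are invented for the sketch, not the paper's calibrated policy.

```python
import numpy as np

rng = np.random.default_rng(0)

def activation_density(acts):
    """Fraction of non-zero (post-ReLU) activations in a layer."""
    return float(np.count_nonzero(acts) / acts.size)

def bits_from_density(density, candidates=(4, 8, 16)):
    """Toy policy: sparser layers (low density) tolerate fewer bits.
    Thresholds here are illustrative only."""
    if density < 0.2:
        return candidates[0]
    if density < 0.5:
        return candidates[1]
    return candidates[2]

# Simulated post-ReLU activations for three layers of varying sparsity.
layers = {
    "conv1": np.maximum(rng.normal(-1.0, 1.0, 10_000), 0),  # very sparse
    "conv2": np.maximum(rng.normal(-0.2, 1.0, 10_000), 0),
    "fc":    np.maximum(rng.normal(0.5, 1.0, 10_000), 0),   # dense
}

for name, acts in layers.items():
    d = activation_density(acts)
    print(f"{name}: density={d:.2f} -> {bits_from_density(d)}-bit")
```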

Activation Density driven Energy-Efficient Pruning in Training

no code implementations7 Feb 2020 Timothy Foldy-Porto, Yeshwanth Venkatesha, Priyadarshini Panda

Neural network pruning with suitable retraining can yield networks with considerably fewer parameters than the original at comparable accuracy (a toy in-training pruning loop follows this entry).

Network Pruning
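
In the same spirit as the entry above, a toy in-training pruning loop might track per-channel activation density and drop channels whose density collapses, then keep training the smaller network. The channel statistics, threshold, and schedule below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy layer: 16 output channels; per-channel activation density decides
# which channels to drop mid-training (illustrative threshold only).
n_channels, threshold = 16, 0.15
alive = np.ones(n_channels, dtype=bool)

for epoch in range(5):
    # Stand-in for a forward pass over a training batch: post-ReLU
    # activations per channel, with some channels nearly silent.
    acts = np.maximum(rng.normal(loc=np.linspace(-2, 1, n_channels),
                                 size=(256, n_channels)), 0)
    density = (acts > 0).mean(axis=0)
    # Prune channels whose activation density has collapsed; a real
    # implementation would remove them from the layer and keep training.
    alive &= density > threshold
    print(f"epoch {epoch}: {alive.sum()}/{n_channels} channels remain")
```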
