no code implementations • 28 May 2023 • Abhiroop Bhattacharjee, Abhishek Moitra, Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda
In-Memory Computing (IMC) platforms such as analog crossbars are gaining attention as they enable the acceleration of low-precision Deep Neural Networks (DNNs) with high area and compute efficiency.
no code implementations • 11 May 2023 • Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda
In this paper, we propose DC-NAS -- a divide-and-conquer approach that performs supernet-based Neural Architecture Search (NAS) in a federated system by systematically sampling the search space.
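The excerpt only names the high-level idea. Below is a minimal sketch of the divide-and-conquer notion for a supernet-style search space: a toy per-layer operation space is split into disjoint sub-spaces that can each be searched separately. The operation set, partitioning rule, and function names are illustrative assumptions, not the DC-NAS algorithm itself.

```python
# Hypothetical sketch: partition a toy NAS search space and sample candidates
# from each sub-space. The space and partitioning rule are assumptions, not DC-NAS.
import itertools
import random

OPS = ["conv3x3", "conv5x5", "skip", "maxpool"]   # per-layer operation choices
NUM_LAYERS = 4

def full_search_space():
    """Enumerate every architecture as one op choice per layer."""
    return list(itertools.product(OPS, repeat=NUM_LAYERS))

def divide(space, num_partitions):
    """Split the space into roughly equal disjoint sub-spaces (remainder dropped)."""
    random.shuffle(space)
    size = len(space) // num_partitions
    return [space[i * size:(i + 1) * size] for i in range(num_partitions)]

def sample_candidates(subspace, k):
    """Sample k candidate architectures from one sub-space for local search."""
    return random.sample(subspace, min(k, len(subspace)))

if __name__ == "__main__":
    space = full_search_space()
    subspaces = divide(space, num_partitions=4)        # e.g., one per group of clients
    for i, sub in enumerate(subspaces):
        print(f"sub-space {i}: {len(sub)} archs, sample:", sample_candidates(sub, 2))
```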
1 code implementation • 26 Nov 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, Priyadarshini Panda
After training, we observe that information becomes highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration.
1 code implementation • 4 Jul 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda
To scale up a pruning technique towards deep SNNs, we investigate the Lottery Ticket Hypothesis (LTH), which states that dense networks contain smaller subnetworks (i.e., winning tickets) that achieve performance comparable to the dense networks.
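The standard LTH procedure is iterative magnitude pruning with rewinding to the initial weights. Below is a minimal sketch of that generic recipe; the MLP, pruning rate, and placeholder training step are illustrative, and this is not the paper's SNN-specific method.

```python
# Minimal sketch of Lottery Ticket-style iterative magnitude pruning with
# rewinding to the initial weights; the model and pruning rate are illustrative.
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
init_state = copy.deepcopy(model.state_dict())         # theta_0, kept for rewinding
masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

def train_one_round(model, masks):
    """Placeholder for full training; here it only keeps pruned weights at zero."""
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p.mul_(masks[n])

def prune_by_magnitude(model, masks, rate=0.2):
    """Zero out the smallest-magnitude surviving weights in each layer."""
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n not in masks:
                continue
            alive = p[masks[n].bool()].abs()
            threshold = alive.quantile(rate)            # prune the lowest `rate` fraction
            masks[n] = (p.abs() > threshold).float() * masks[n]

for it in range(3):                                     # a few prune-and-rewind iterations
    train_one_round(model, masks)
    prune_by_magnitude(model, masks, rate=0.2)
    model.load_state_dict(init_state)                   # rewind surviving weights to theta_0
    kept = sum(float(m.sum()) for m in masks.values())
    total = sum(m.numel() for m in masks.values())
    print(f"iteration {it}: sparsity {1 - kept / total:.2%}")
```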
no code implementations • 11 Apr 2022 • Abhiroop Bhattacharjee, Yeshwanth Venkatesha, Abhishek Moitra, Priyadarshini Panda
Recent years have seen a paradigm shift towards multi-task learning.
no code implementations • 24 Mar 2022 • Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Yuhang Li, Priyadarshini Panda
However, little attention has been paid to the additional challenges that emerge when federated aggregation is performed in a continual learning system.
1 code implementation • 31 Jan 2022 • Youngeun Kim, Hyoungseob Park, Abhishek Moitra, Abhiroop Bhattacharjee, Yeshwanth Venkatesha, Priyadarshini Panda
Then, we measure the robustness of the coding techniques against two adversarial attack methods.
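For context, a common way to turn static images into spike trains is Poisson rate coding, where the normalized pixel intensity sets the per-timestep spike probability; the sketch below shows that generic scheme (function name and timestep count are illustrative), not necessarily the exact codings compared in the paper.

```python
# Minimal sketch of Poisson rate coding for SNN inputs: each normalized pixel
# intensity becomes the per-timestep spike probability. Direct coding would
# instead feed the analog intensities to the first layer at every timestep.
import torch

def rate_code(images, timesteps=8):
    """images: (B, C, H, W) in [0, 1] -> spikes: (T, B, C, H, W) in {0, 1}."""
    probs = images.clamp(0, 1).unsqueeze(0).expand(timesteps, *images.shape)
    return torch.bernoulli(probs)

if __name__ == "__main__":
    batch = torch.rand(2, 3, 32, 32)           # stand-in for CIFAR images in [0, 1]
    spikes = rate_code(batch, timesteps=8)
    print(spikes.shape, spikes.mean().item())  # firing rate tracks mean intensity
```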
1 code implementation • 23 Jan 2022 • Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Priyadarshini Panda
Interestingly, SNASNet found by our search algorithm achieves higher performance with backward connections, demonstrating the importance of designing SNN architecture for suitably using temporal information.
1 code implementation • 11 Jun 2021 • Yeshwanth Venkatesha, Youngeun Kim, Leandros Tassiulas, Priyadarshini Panda
To validate the proposed federated learning framework, we experimentally evaluate the advantages of SNNs on various aspects of federated learning using the CIFAR10 and CIFAR100 benchmarks.
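As background, the standard aggregation step in federated learning is FedAvg-style weighted averaging of client weights; a minimal sketch under that assumption follows. The model, client sample counts, and function names are illustrative and are not the paper's framework.

```python
# Sketch of FedAvg-style weight aggregation, the standard federated learning
# baseline; the client models and sample counts here are illustrative.
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 10))

def fed_avg(client_states, client_sizes):
    """Weighted average of client state_dicts; weights proportional to local data size."""
    total = float(sum(client_sizes))
    return {
        key: sum(state[key] * (n / total) for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }

if __name__ == "__main__":
    clients = [make_model() for _ in range(3)]          # stand-ins for locally trained models
    global_state = fed_avg([m.state_dict() for m in clients], client_sizes=[500, 300, 200])
    server = make_model()
    server.load_state_dict(global_state)                # global model for the next round
```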
no code implementations • 7 Apr 2021 • Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda
2) Class leakage occurs when class-related features can be reconstructed from network parameters.
no code implementations • 12 Jan 2021 • Karina Vasquez, Yeshwanth Venkatesha, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda
Additionally, we find that integrating AD-based quantization with AD-based pruning (both conducted during training) yields up to ~198x and ~44x energy reductions for the VGG19 and ResNet18 architectures, respectively, on a PIM platform compared to baseline 16-bit precision, unpruned models.
no code implementations • 7 Feb 2020 • Timothy Foldy-Porto, Yeshwanth Venkatesha, Priyadarshini Panda
Neural network pruning with suitable retraining can yield networks with considerably fewer parameters than the original while maintaining comparable accuracy.
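As a concrete illustration of that generic recipe, the sketch below applies one-shot magnitude pruning with PyTorch's pruning utility followed by a single retraining step; the model, data, and 80% pruning amount are illustrative assumptions, not this paper's procedure.

```python
# Minimal sketch of one-shot magnitude pruning followed by retraining, using
# PyTorch's pruning utilities; model, data, and pruning amount are illustrative.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

# Remove the 80% smallest-magnitude weights in each linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)

# Retrain (here a single dummy step) so the surviving weights recover accuracy.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()                                        # pruned weights stay masked at zero

remaining = sum(int(m.weight.count_nonzero()) for m in model if isinstance(m, nn.Linear))
print("non-zero weights after pruning:", remaining)
```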