no code implementations • 14 Apr 2024 • Rachmad Vidya Wicaksana Putra, Alberto Marchisio, Muhammad Shafique
The experimental results show that our proposed framework can maintain high accuracy (i.e., 84.12%) with 68.75% memory saving, 3.58x speed-up, and 4.03x energy-efficiency improvement compared to the state-of-the-art work on the NCARS dataset, thereby enabling energy-efficient embodied SNN deployments for autonomous agents.
no code implementations • 4 Apr 2024 • Iqra Bano, Rachmad Vidya Wicaksana Putra, Alberto Marchisio, Muhammad Shafique
Toward this, we propose a novel methodology to systematically study and analyze the impact of SNN parameters considering event-based automotive data, and then leverage this analysis to enhance SNN development.
no code implementations • 4 Apr 2024 • Rachmad Vidya Wicaksana Putra, Alberto Marchisio, Fakhreddine Zayer, Jorge Dias, Muhammad Shafique
Toward this, recent advances in neuromorphic computing with Spiking Neural Networks (SNNs) have demonstrated the potential to enable embodied intelligence for robotics through a bio-plausible computing paradigm that mimics how the biological brain works, known as "neuromorphic artificial intelligence (AI)".
no code implementations • 2 Apr 2024 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique
Spiking Neural Networks (SNNs) can offer ultra-low power/energy consumption for machine learning-based applications due to their sparse spike-based operations.
no code implementations • 17 Feb 2024 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique
Autonomous mobile agents (e.g., UAVs and UGVs) are typically expected to incur low power/energy consumption for solving machine learning tasks (such as object recognition), as these mobile agents are usually powered by portable batteries.
no code implementations • 8 Apr 2023 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
Our FAM technique leverages the fault map of the SNN compute engine to (i) minimize weight corruption when mapping weight bits onto faulty memory cells, and (ii) selectively employ faulty neurons that do not cause significant accuracy degradation, thereby maintaining accuracy and throughput while considering the SNN operations and processing dataflow.
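Part (i) of such a fault-aware mapping can be illustrated with a toy sketch. This is a hypothetical example, not the paper's actual FAM algorithm: given per-word stuck-at fault masks, it stores each 8-bit weight either as-is or bit-reversed, whichever corrupts the read-back value less (e.g., steering a faulty MSB cell toward the weight's LSB side).

```python
# Hypothetical fault-aware mapping sketch (not the paper's exact FAM):
# choose, per weight, the bit orientation that minimizes corruption
# caused by known stuck-at-0 / stuck-at-1 memory cells.

def apply_faults(bits, stuck0, stuck1):
    """Return the 8-bit word after stuck-at-0 and stuck-at-1 faults."""
    return (bits & ~stuck0 & 0xFF) | stuck1

def best_orientation(weight, stuck0, stuck1):
    """Try normal and bit-reversed placement; return (stored, reversed, error)."""
    reverse = int(f"{weight:08b}"[::-1], 2)
    candidates = []
    for stored, undo in ((weight, False), (reverse, True)):
        read = apply_faults(stored, stuck0, stuck1)
        value = int(f"{read:08b}"[::-1], 2) if undo else read
        candidates.append((abs(value - weight), stored, undo))
    err, stored, undo = min(candidates)
    return stored, undo, err

# A cell with bit 7 (MSB) stuck at 0 would corrupt a large weight unless
# we store it bit-reversed, so its MSB lands on the healthy bit-0 cell.
stored, rev, err = best_orientation(weight=0b1000_0010,
                                    stuck0=0b1000_0000, stuck1=0)
print(stored, rev, err)  # → 130 stored as 65, reversed, zero error
```

A real mapper would consider many placements across the fault map rather than just two orientations, but the objective (minimize value error on faulty cells) is the same.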
no code implementations • 8 Apr 2023 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
The key mechanisms of our EnforceSNN are: (1) employing quantized weights to reduce the DRAM access energy; (2) devising an efficient DRAM mapping policy to minimize the DRAM energy-per-access; (3) analyzing the SNN error tolerance to understand its accuracy profile considering different bit error rate (BER) values; (4) leveraging this information to develop an efficient fault-aware training (FAT) that considers different BER values and bit error locations in DRAM to improve the SNN error tolerance; and (5) developing an algorithm to select the SNN model that offers good trade-offs among accuracy, memory, and energy consumption.
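Mechanisms (1), (3), and (4) revolve around quantized weights and bit-error injection at a given BER. The sketch below is an illustrative assumption of how such injection could look (the unsigned 8-bit format and function names are mine, not EnforceSNN's actual scheme):

```python
# Illustrative sketch: quantize weights to 8-bit codes, then flip each
# stored bit independently with probability `ber` to emulate DRAM errors
# during fault-aware training. Format and names are assumptions.
import numpy as np

def quantize_u8(w, w_min=-1.0, w_max=1.0):
    """Map float weights in [w_min, w_max] to unsigned 8-bit codes."""
    codes = np.round((np.clip(w, w_min, w_max) - w_min) / (w_max - w_min) * 255)
    return codes.astype(np.uint8)

def inject_bit_errors(codes, ber, rng):
    """Flip each of the 8 stored bits independently with probability `ber`."""
    flips = np.zeros_like(codes)
    for bit in range(8):
        flips |= (rng.random(codes.shape) < ber).astype(np.uint8) << bit
    return codes ^ flips

rng = np.random.default_rng(0)
codes = quantize_u8(np.array([0.5, -0.25, 0.0]))
noisy = inject_bit_errors(codes, ber=0.01, rng=rng)
```

Training the SNN on such error-injected weights is what lets the model tolerate a nonzero BER at inference time.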
no code implementations • 3 Mar 2023 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique
These requirements can be fulfilled by Spiking Neural Networks (SNNs) as they offer low power/energy processing due to their sparse computations and efficient online learning with bio-inspired learning mechanisms for adapting to different environments.
no code implementations • 24 Dec 2022 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique
Towards this, we propose Mantis, a methodology to systematically employ SNNs on autonomous mobile agents to enable energy-efficient processing and adaptive capabilities in dynamic environments.
no code implementations • 17 Jun 2022 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique
Larger Spiking Neural Network (SNN) models are typically favored as they can offer higher accuracy.
no code implementations • 24 May 2022 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique
Our lpSpikeCon methodology employs the following key steps: (1) analyzing the impacts of training the SNN model under unsupervised continual learning settings with reduced weight precision on the inference accuracy; (2) leveraging this study to identify SNN parameters that have a significant impact on the inference accuracy; and (3) developing an algorithm for searching the respective SNN parameter values that improve the quality of unsupervised continual learning.
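Step (3) above is, in essence, a search over the sensitive parameter values. A generic sketch of such a search is shown below; the parameter names and the `evaluate` callback are placeholders, not lpSpikeCon's actual algorithm or API:

```python
# Generic parameter-search sketch in the spirit of step (3): sweep the
# SNN parameters flagged as significant and keep the best-scoring setting.
# Parameter names and evaluate() are illustrative placeholders.
from itertools import product

def search(evaluate, grid):
    """Score every parameter combination; return the best (config, accuracy)."""
    best_acc, best_cfg = -1.0, None
    for values in product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        acc = evaluate(cfg)  # e.g., accuracy under 4-bit weights
        if acc > best_acc:
            best_acc, best_cfg = acc, cfg
    return best_cfg, best_acc

# Toy evaluator: pretend accuracy peaks at threshold=1.0, decay=0.9.
toy = lambda c: 1.0 - abs(c["threshold"] - 1.0) - abs(c["decay"] - 0.9)
cfg, acc = search(toy, {"threshold": [0.5, 1.0, 1.5], "decay": [0.8, 0.9]})
print(cfg)  # → {'threshold': 1.0, 'decay': 0.9}
```

Exhaustive search is only viable because the sensitivity analysis in steps (1)-(2) has already narrowed the space to a few significant parameters.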
no code implementations • 10 Mar 2022 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
These errors can change the weight values and neuron operations in the compute engine of SNN accelerators, thereby leading to incorrect outputs and accuracy degradation.
no code implementations • 20 Sep 2021 • Muhammad Shafique, Alberto Marchisio, Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif
Afterward, we discuss how to further improve the performance (latency) and the energy efficiency of Edge AI systems through HW/SW-level optimizations, such as pruning, quantization, and approximation.
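As one concrete instance of the optimizations named above, here is a minimal magnitude-pruning sketch. It is illustrative only; production Edge AI toolchains implement pruning with more sophisticated schedules and structure constraints:

```python
# Minimal magnitude-pruning sketch: zero out the smallest-magnitude
# fraction of the weights, trading a little accuracy for sparsity.
import numpy as np

def prune_by_magnitude(w, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    k = int(np.ceil(sparsity * w.size))
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    out = w.copy()
    out[np.abs(out) <= thresh] = 0.0
    return out

w = np.array([0.05, -0.4, 0.9, -0.02, 0.3])
pruned = prune_by_magnitude(w, sparsity=0.4)  # zeros the two smallest entries
```

Quantization and approximation follow the same pattern: a HW/SW-level transformation that shrinks compute or memory cost while keeping accuracy within budget.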
no code implementations • 23 Aug 2021 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
Since recent works still focus on fault modeling and random fault injection in SNNs, the impact of memory faults in SNN hardware architectures on accuracy, and the respective fault-mitigation techniques, are not thoroughly explored.
no code implementations • 5 Jul 2021 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique
A prominent technique for reducing the memory footprint of Spiking Neural Networks (SNNs) without decreasing the accuracy significantly is quantization.
no code implementations • 28 Feb 2021 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique
Spiking Neural Networks (SNNs) bear the potential of efficient unsupervised and continual learning capabilities because of their biological plausibility, but their complexity still poses a serious research challenge to enable their energy-efficient design for resource-constrained scenarios (like embedded systems, IoT-Edge, etc.).
no code implementations • 28 Feb 2021 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
The key mechanisms of SparkXD are: (1) improving the SNN error tolerance through fault-aware training that considers bit errors from approximate DRAM, (2) analyzing the error tolerance of the improved SNN model to find the maximum tolerable bit error rate (BER) that meets the targeted accuracy constraint, and (3) energy-efficient DRAM data mapping for the resilient SNN model that maps the weights in the appropriate DRAM location to minimize the DRAM access energy.
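Mechanism (2) amounts to finding the largest BER that still meets the accuracy constraint. Assuming accuracy degrades monotonically as BER grows, this can be sketched as a bisection; `accuracy_at` is a placeholder for evaluating the trained SNN under an error-injected DRAM model, not SparkXD's actual procedure:

```python
# Illustrative bisection for the maximum tolerable bit error rate (BER),
# assuming accuracy decreases monotonically with BER. accuracy_at() is a
# placeholder for an error-injected evaluation of the trained SNN.
def max_tolerable_ber(accuracy_at, target, lo=0.0, hi=0.5, iters=30):
    """Return the largest BER in [lo, hi] with accuracy >= target."""
    if accuracy_at(hi) >= target:
        return hi                 # even the worst case is acceptable
    for _ in range(iters):
        mid = (lo + hi) / 2
        if accuracy_at(mid) >= target:
            lo = mid              # still acceptable: push BER higher
        else:
            hi = mid              # too many errors: back off
    return lo

# Toy accuracy model: linear drop from 0.9 at BER = 0.
toy = lambda ber: 0.9 - 4.0 * ber
ber = max_tolerable_ber(toy, target=0.8)   # converges to ~0.025
```

The resulting BER bound then drives mechanism (3): any approximate-DRAM configuration whose error rate stays below it preserves the targeted accuracy.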
no code implementations • 17 Jul 2020 • Rachmad Vidya Wicaksana Putra, Muhammad Shafique
FSpiNN reduces the computational requirements by reducing the number of neuronal operations, the STDP-based synaptic weight updates, and the STDP complexity.
no code implementations • 21 Apr 2020 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
Many convolutional neural network (CNN) accelerators face performance- and energy-efficiency challenges which are crucial for embedded implementations, due to high DRAM access latency and energy.
no code implementations • 4 Feb 2019 • Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique
Our experimental results show that ROMANet saves DRAM access energy by 12% for AlexNet, 36% for VGG-16, and 46% for MobileNet, while also improving DRAM throughput by 10%, compared to the state-of-the-art.
no code implementations • 30 Oct 2018 • Muhammad Abdullah Hanif, Rachmad Vidya Wicaksana Putra, Muhammad Tanvir, Rehan Hafiz, Semeen Rehman, Muhammad Shafique
The state-of-the-art accelerators for Convolutional Neural Networks (CNNs) typically focus on accelerating only the convolutional layers, while paying little attention to the fully-connected layers.